pickle

Python Multiprocessing Lib Error (AttributeError: __exit__)

Submitted by 梦想的初衷 on 2019-12-17 07:24:03
Question: I am getting this error when using pool.map(funct, iterable):

    AttributeError: __exit__

There is no explanation, only a stack trace into the pool.py file within the module. I am using it this way:

    with Pool(processes=2) as pool:
        pool.map(myFunction, mylist)
        pool.map(myfunction2, mylist2)

I suspect there could be a problem with picklability (Python needs to pickle, i.e. transform the list data into a byte stream), yet I'm not sure whether this is true or how to debug it. EDIT: new format of code that produces
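A likely explanation, not spelled out in the excerpt above: multiprocessing.Pool only became a context manager in Python 3.3, so with Pool(...) as pool: raises AttributeError: __exit__ on Python 2. A minimal sketch of the equivalent pattern that works on both versions, using a hypothetical worker function square:

    from multiprocessing import Pool

    def square(x):  # hypothetical stand-in for myFunction
        return x * x

    if __name__ == "__main__":
        pool = Pool(processes=2)
        try:
            print(pool.map(square, [1, 2, 3]))
        finally:
            pool.close()  # stop accepting new tasks
            pool.join()   # wait for the workers to exit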

Python serialization - Why pickle?

Submitted by 我的梦境 on 2019-12-17 04:40:50
Question: I understand that Python pickling is a way to 'store' a Python object in a way that respects object-oriented programming, unlike output written to a text file or a database. Do you have more details or references on the following points: Where are pickled objects 'stored'? Why does pickling preserve object representation better than, say, storing in a database? Can I retrieve pickled objects from one Python shell session in another? Do you have significant examples of when serialization is useful? Does
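For the point about carrying objects from one shell session to another, a minimal sketch, assuming write access to the current directory (the file name state.pkl is arbitrary):

    import pickle

    # First session: write the object to disk.
    data = {"name": "example", "values": [1, 2, 3]}
    with open("state.pkl", "wb") as f:
        pickle.dump(data, f)

    # Later session (a separate interpreter run): read it back.
    with open("state.pkl", "rb") as f:
        restored = pickle.load(f)
    print(restored == data)  # True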

Can Python pickle lambda functions?

Submitted by 偶尔善良 on 2019-12-17 03:07:55
Question: I have read in a number of threads that Python pickle/cPickle cannot pickle lambda functions. However, the following code works under Python 2.7.6:

    import cPickle as pickle

    if __name__ == "__main__":
        s = pickle.dumps(lambda x, y: x + y)
        f = pickle.loads(s)
        assert f(3, 4) == 7

So what is going on? Or rather, what is the limit of pickling lambdas? [EDIT] I think I know why this code runs. I forgot (sorry!) that I am running Stackless Python, which has a form of micro-threads called tasklets
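In standard CPython, pickle serializes functions by reference to their module-level name, so an anonymous lambda fails. A minimal sketch showing the failure and one common workaround with the third-party dill package (an assumption, not part of the question above):

    import pickle
    import dill  # third-party package, installed separately

    add = lambda x, y: x + y

    try:
        pickle.dumps(add)          # standard pickle looks functions up by name
    except (pickle.PicklingError, AttributeError) as exc:
        print("pickle failed:", exc)

    restored = dill.loads(dill.dumps(add))  # dill serializes the code object itself
    print(restored(3, 4))  # 7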

Multiprocessing: How to use Pool.map on a function defined in a class?

Submitted by 扶醉桌前 on 2019-12-17 02:02:57
Question: When I run something like:

    from multiprocessing import Pool
    p = Pool(5)
    def f(x):
        return x*x
    p.map(f, [1,2,3])

it works fine. However, putting this inside a class:

    class calculate(object):
        def run(self):
            def f(x):
                return x*x
            p = Pool()
            return p.map(f, [1,2,3])

    cl = calculate()
    print cl.run()

gives me the following error:

    Exception in thread Thread-1:
    Traceback (most recent call last):
      File "/sw/lib/python2.6/threading.py", line 532, in __bootstrap_inner
        self.run()
      File "/sw/lib
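The usual explanation is that multiprocessing must pickle the callable passed to map, and pickle can only serialize functions it can import by name at module level; a function defined inside a method cannot be. A minimal sketch of one common restructuring, assuming Python 3 (where bound methods of an importable class pickle fine):

    from multiprocessing import Pool

    class Calculate(object):
        def square(self, x):
            # Defined at class level, so pickle can find it by qualified name.
            return x * x

        def run(self):
            with Pool(processes=4) as p:
                return p.map(self.square, [1, 2, 3])

    if __name__ == "__main__":
        print(Calculate().run())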

Pickling Pandas DataFrames subclasses which include metadata

Submitted by 不羁的心 on 2019-12-14 03:58:39
Question: The question of attaching metadata to Pandas objects and getting that data to survive a pickle/unpickle round trip is a perennial one. I see some very old answers which basically say that you can't; hopefully a more current answer to this question will be yes. I'm using Pandas 0.23.3. I've made some Pandas DataFrame subclasses, and I think I know how to do this correctly: I have a _constructor method, and my __init__ method can handle BlockManager objects. When I create meta-data attributes, I
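One commonly cited pattern, sketched below, is to list the custom attributes in _metadata so that pandas' own __getstate__/__finalize__ machinery carries them along; whether the attribute actually survives pickling depends on the pandas version, so this is an assumption rather than a guaranteed answer for 0.23.3:

    import pickle
    import pandas as pd

    class MetaFrame(pd.DataFrame):
        _metadata = ["source"]        # names listed here are treated as metadata

        @property
        def _constructor(self):
            return MetaFrame

    df = MetaFrame({"a": [1, 2, 3]})
    df.source = "sensor-42"

    roundtrip = pickle.loads(pickle.dumps(df))
    print(getattr(roundtrip, "source", None))  # "sensor-42" on recent pandas releases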

How can type 'SwigPyObject' be registered using copy_reg.pickle in Python?

Submitted by 為{幸葍}努か on 2019-12-14 03:48:44
Question: I have a class method which I would like to use with multiprocessing.Pool for parallelisation. As instance methods are not pickleable in Python 2, I have used the following:

    import copy_reg
    import types

    def _reduce_method(m):
        if m.im_self is None:
            return getattr, (m.im_class, m.im_func.func_name)
        else:
            return getattr, (m.im_self, m.im_func.func_name)

    copy_reg.pickle(types.MethodType, _reduce_method)

This works with no problem. However, within my class I use the GDAL module (https://pypi.org/project/GDAL/) for
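SwigPyObject instances wrap raw C pointers, so there is nothing portable for pickle to serialize; rather than registering the SWIG type itself, a common workaround is to drop the handle in __getstate__ and reopen the underlying file in __setstate__. A minimal sketch with a hypothetical RasterTask wrapper (the GDAL calls are assumptions, not taken from the question):

    class RasterTask(object):
        """Hypothetical wrapper: a file path plus an unpicklable SWIG-backed handle."""

        def __init__(self, path):
            self.path = path
            self.dataset = self._open(path)       # SwigPyObject-backed proxy

        @staticmethod
        def _open(path):
            from osgeo import gdal                # assumption: GDAL is installed
            return gdal.Open(path)

        def __getstate__(self):
            state = self.__dict__.copy()
            state["dataset"] = None               # drop the raw C-pointer wrapper
            return state

        def __setstate__(self, state):
            self.__dict__.update(state)
            self.dataset = self._open(self.path)  # reopen in the worker process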

Pickling from multiple threads in Python

Submitted by 纵然是瞬间 on 2019-12-14 03:45:50
Question: I have a Python program with multiple threads. Each thread detects events, which I would like to store somewhere so that I can read them in again later (for testing). Right now I'm using pickle to output the events, and each thread writes to a different file. Ideally, I would use only one output file and have all the threads write to it, but when I try this it looks like the threads write their output at the same time and the events don't get pickled properly. Is there a way to do
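A minimal sketch of one standard approach (not taken from the question): funnel all events through a queue.Queue to a single writer thread that owns the file, so pickle.dump calls never interleave. The file name events.pkl and the event dicts are placeholders:

    import pickle
    import queue
    import threading

    event_queue = queue.Queue()
    _DONE = object()

    def writer(path):
        # Only this thread touches the file, so pickled records never interleave.
        with open(path, "wb") as f:
            while True:
                event = event_queue.get()
                if event is _DONE:
                    break
                pickle.dump(event, f)

    def detector(thread_id):
        for i in range(3):
            event_queue.put({"thread": thread_id, "event": i})  # placeholder event

    w = threading.Thread(target=writer, args=("events.pkl",))
    w.start()
    workers = [threading.Thread(target=detector, args=(n,)) for n in range(4)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    event_queue.put(_DONE)
    w.join()

Reading the file back is then a loop of pickle.load calls until EOFError is raised.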

How to remove instancemethod objects, for the sake of pickle, without modifying the original class

Submitted by ╄→尐↘猪︶ㄣ on 2019-12-13 22:01:06
Question: I want to persistently hold on to an object from reverend.thomas.Bayes. Of course, if I try to pickle one of these objects directly, I get:

    TypeError: can't pickle instancemethod objects

To work around this, I have tried declaring two functions:

    import types
    from itertools import chain
    from copy import copy
    from reverend.thomas import Bayes

    def prepare_bayes_for_pickle(bayes_obj):
        dic = copy(bayes_obj.__dict__)  # I also tried using deepcopy instead of copy
        for k in dic:
            if type(k) == types
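One general pattern that leaves the original class untouched (a sketch, not the accepted answer to this question): pickle only the instance's __dict__ with bound-method values filtered out, then rebuild a fresh instance and restore that state. The helper names below are hypothetical:

    import pickle
    import types

    def picklable_state(obj):
        """Copy of obj.__dict__ with bound-method values left out."""
        return {k: v for k, v in obj.__dict__.items()
                if not isinstance(v, types.MethodType)}

    def dump_object(obj, path):
        with open(path, "wb") as f:
            pickle.dump(picklable_state(obj), f)

    def load_into(instance, path):
        """Restore saved state onto a freshly constructed instance."""
        with open(path, "rb") as f:
            instance.__dict__.update(pickle.load(f))
        return instance

Usage would look like dump_object(bayes_obj, "bayes.pkl") and, later, load_into(Bayes(), "bayes.pkl").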

Pickling objects that refer to each other

Submitted by 若如初见. on 2019-12-13 19:19:31
Question: I have three Python classes, Student, Event, and StudentEvent. For simplicity:

    class Student:
        def __init__(self, id):
            self.id = id

    class Event:
        def __init__(self, id):
            self.id = id
            self.studentevents = []

    class StudentEvent:
        def __init__(self, student, event, id):
            self.student = student
            self.event = event
            self.id = id

I have between thousands and millions of instances of each of these classes, which I put into dictionaries that I can read and analyze. Reading and creating the objects takes
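pickle memoizes objects within a single dump, so mutually referring objects keep their shared identity as long as they are pickled together, for example all three dictionaries in one call. A minimal sketch, assuming the Student, Event, and StudentEvent definitions above are at module level so pickle can import them by name:

    import pickle

    s = Student(1)
    e = Event(10)
    se = StudentEvent(s, e, 100)
    e.studentevents.append(se)

    blob = pickle.dumps({"students": {1: s},
                         "events": {10: e},
                         "studentevents": {100: se}})
    data = pickle.loads(blob)

    # Cross-references are preserved within a single dump: both paths lead
    # to the very same StudentEvent object after loading.
    print(data["events"][10].studentevents[0] is data["studentevents"][100])  # True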