pickle

Python - How can I make this un-pickleable object pickleable?

核能气质少年 submitted on 2019-12-04 23:37:30
Question: So, I have an object that contains quite a few non-pickleable things (pygame events, OrderedDicts, a clock, etc.) and I need to save it to disk. The thing is, if I can just get it to store the progress (a single integer is all I need), then I can pass that to the object's init and it will rebuild all of those things. Unfortunately, a framework I am using (Renpy) will pickle the object and attempt to load it, despite the fact that I could save it as a single integer, and …
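
A minimal sketch (not from the original question) of how __reduce__ can make such an object pickle down to a single integer and rebuild everything else in __init__; the class name GameState and the attribute progress are hypothetical stand-ins:

    import pickle

    class GameState:
        def __init__(self, progress=0):
            self.progress = progress
            # Hypothetical stand-in for the un-pickleable members
            # (pygame events, clocks, etc.) rebuilt from `progress`.
            self.clock = object()

        def __reduce__(self):
            # Tell pickle to store only a constructor call with the progress
            # integer; everything else is recreated by __init__ on load.
            return (self.__class__, (self.progress,))

    state = GameState(progress=7)
    restored = pickle.loads(pickle.dumps(state))
    print(restored.progress)  # 7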

How to get around the pickling error of python multiprocessing without being in the top-level?

大兔子大兔子 submitted on 2019-12-04 21:36:18
Question: I've researched this question multiple times, but haven't found a workaround that either works in my case or that I understand, so please bear with me. Basically, I have a hierarchical organization of functions, and that is preventing me from multiprocessing at the top level. Unfortunately, I don't believe I can change the layout of the program, because I need all the variables that I create after the initial inputs. For example, say I have this: import multiprocessing def calculate(x): …
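
A minimal sketch of the usual way around this error (an illustration, not the asker's code): worker functions must be defined at module top level so they can be pickled by reference, and any extra state is passed as arguments instead of being captured in closures or nested functions.

    import multiprocessing

    def calculate(x):
        # Top-level functions can be pickled by reference,
        # so they are safe to send to worker processes.
        return x * x

    def run(values):
        # Pass per-call data as arguments rather than capturing it
        # in a closure, which cannot be pickled.
        with multiprocessing.Pool() as pool:
            return pool.map(calculate, values)

    if __name__ == "__main__":
        print(run([1, 2, 3]))  # [1, 4, 9]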

Python pickle to xml

本小妞迷上赌 submitted on 2019-12-04 21:34:43
How can I convert a pickle object to an XML document? For example, I have a pickle like this: cpyplusplus_test Coordinate p0 (I23 I-11 tp1 Rp2 . I want to get something like: <Coordinate> <x>23</x> <y>-11</y> </Coordinate> The Coordinate class has x and y attributes, of course. I can supply an XML schema for the conversion. I tried the gnosis.xml module. It can objectify XML documents into Python objects, but it cannot serialize objects to XML documents like the above. Any suggestion? Thanks. Answer: gnosis.xml does support pickling to XML: import gnosis.xml.pickle xml_str = gnosis.xml.pickle.dumps(obj) To …
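
A short sketch built on the answer's dumps() call, assuming a plain Coordinate class with x and y; the reverse call loads() is an assumption on my part about the gnosis.xml.pickle API, mirroring the standard pickle module:

    import gnosis.xml.pickle

    class Coordinate:
        def __init__(self, x, y):
            self.x = x
            self.y = y

    obj = Coordinate(23, -11)
    xml_str = gnosis.xml.pickle.dumps(obj)  # XML representation of the object
    print(xml_str)

    # Assumed reverse conversion, by analogy with pickle.loads():
    restored = gnosis.xml.pickle.loads(xml_str)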

Multiprocessing using chunks does not work with predict_proba

*爱你&永不变心* submitted on 2019-12-04 20:45:24
When I run predict_proba on a dataframe without multiprocessing I get the expected behavior. The code is as follows: probabilities_data = classname.perform_model_prob_predictions_nc(prediction_model, vectorized_data) where perform_model_prob_predictions_nc is:

    def perform_model_prob_predictions_nc(model, dataFrame):
        try:
            return model.predict_proba(dataFrame)
        except AttributeError:
            logging.error("AttributeError occurred", exc_info=True)

But when I try to run the same function using chunks and multiprocessing: probabilities_data = classname.perform_model_prob_predictions(prediction_model, chunks …
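
The chunked version is cut off above, so here is a hedged sketch of one way it is commonly written, assuming the fitted model is itself picklable so it can be shipped to worker processes; predict_chunk and the pool layout are my assumptions, only the function name perform_model_prob_predictions comes from the question:

    import multiprocessing
    from functools import partial

    import numpy as np

    def predict_chunk(model, chunk):
        # Each worker receives a (pickled) copy of the model and one chunk.
        return model.predict_proba(chunk)

    def perform_model_prob_predictions(model, chunks):
        with multiprocessing.Pool() as pool:
            results = pool.map(partial(predict_chunk, model), chunks)
        # Stack the per-chunk probability arrays back into one array.
        return np.vstack(results)

    # Hypothetical usage:
    # chunks = np.array_split(vectorized_data, 4)
    # probabilities_data = perform_model_prob_predictions(prediction_model, chunks)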

Python class: Employee management system

若如初见. submitted on 2019-12-04 18:50:28
This exercise assumes that you have created the Employee class for Programming Exercise 4. Create a program that stores Employee objects in a dictionary. Use the employee ID number as the key. The program should present a menu that lets the user perform the following actions:
• Look up an employee in the dictionary
• Add a new employee to the dictionary
• Change an existing employee's name, department, and job title in the dictionary
• Delete an employee from the dictionary
• Quit the program
When the program ends, it should pickle the dictionary and save it to a file. Each time the program …
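
A minimal sketch of the load/save half of such a program (the menu loop and the Employee class are omitted); the file name employees.dat is hypothetical:

    import pickle

    FILENAME = "employees.dat"  # hypothetical file name

    def load_employees():
        # Start with an empty dictionary if no saved file exists yet.
        try:
            with open(FILENAME, "rb") as f:
                return pickle.load(f)
        except (FileNotFoundError, EOFError):
            return {}

    def save_employees(employees):
        # Pickle the whole {id: Employee} dictionary when the program ends.
        with open(FILENAME, "wb") as f:
            pickle.dump(employees, f)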

python 2.6 cPickle.load results in EOFError

时光怂恿深爱的人放手 submitted on 2019-12-04 17:45:50
Question: I use cPickle to pickle a list of integers, using HIGHEST_PROTOCOL: cPickle.dump(l, f, HIGHEST_PROTOCOL). When I try to unpickle this using the following code, I get an EOFError. I tried seeking to offset 0 before unpickling, but the error persists: l = cPickle.load(f). Any ideas? Answer 1: If you are on Windows, make sure you open(filename, 'wb') # for writing and open(filename, 'rb') # for reading. Source: https://stackoverflow.com/questions/2187558/python-2-6-cpickle-load-results-in-eoferror
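
A short sketch of the fix from the answer, assuming the list is written to and read back from the same file; the file name data.pkl is hypothetical:

    import cPickle

    l = [1, 2, 3]

    # Binary pickle protocols require binary file modes (essential on Windows).
    with open('data.pkl', 'wb') as f:
        cPickle.dump(l, f, cPickle.HIGHEST_PROTOCOL)

    with open('data.pkl', 'rb') as f:
        l = cPickle.load(f)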

write() argument must be str, not bytes [duplicate]

依然范特西╮ submitted on 2019-12-04 16:45:29
Question: This question already has answers here: Using pickle.dump - TypeError: must be str, not bytes (2 answers). Closed 3 years ago. I'm a beginner programmer working through the book Python for the Absolute Beginner. I have come across a problem trying to write a high-score function for the trivia game. When the function highscore(user, highscore) is called, I try to assign the arguments accordingly so I can pickle the information to a file for later use. However, I am running into …
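
A minimal sketch of the usual fix for this duplicate: pickle.dump() writes bytes, so the file must be opened in binary mode. Only the highscore(user, highscore) signature comes from the question; the file name high_scores.dat is hypothetical:

    import pickle

    def highscore(user, highscore):
        # Opening with "w" (text mode) triggers "write() argument must be str,
        # not bytes"; pickle needs the binary "wb" mode.
        with open("high_scores.dat", "wb") as f:
            pickle.dump((user, highscore), f)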

How to avoid pickling errors when sharing objects between threads?

此生再无相见时 submitted on 2019-12-04 16:43:01
I have a program in which I need to store a global variable into a file, which I am doing with the pickle module. I have another thread (daemon=False, from the threading module) that sometimes changes the value of the global variable. The value is also modified in global scope (the main program). I dump the value of the variable into a .pkl file every 5 seconds (using yet another thread from the threading module), but when the dump method executes I get the following error: TypeError: can't pickle _thread.lock objects. Why is this happening, and what can I do to fix it? Note: I have found some …
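
A hedged sketch of one common way out, assuming the shared global can be reduced to plain data: keep the threading.Lock separate from whatever gets pickled, and dump only a copy of the value taken while holding the lock. The names counter and progress.pkl are hypothetical:

    import pickle
    import threading

    counter = 0                      # shared global value (plain data only)
    counter_lock = threading.Lock()  # kept outside of anything that is pickled

    def dump_value(path="progress.pkl"):
        # Copy the plain value under the lock, then pickle only that copy;
        # threading objects such as Lock cannot be pickled.
        with counter_lock:
            snapshot = counter
        with open(path, "wb") as f:
            pickle.dump(snapshot, f)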

Pickling a django model with a binary field in cacheops

梦想的初衷 submitted on 2019-12-04 16:35:45
I have a simple Django model with a binary field that I would like to pickle:

    class MyModel(models.Model):
        bin_data = models.BinaryField()

From the context of my unit tests, I do the following:

    import pickle
    tmp_obj = MyModel.objects.create(bin_data="12345")
    obj = MyModel.objects.get(pk=tmp_obj.pk)  # load from DB
    data = pickle.dumps(obj)
    obj2 = pickle.loads(data)

However, pickle.dumps() fails with: TypeError: can't pickle buffer objects. When I instead pickle with data = pickle.dumps(obj, protocol=-1), the dump succeeds but pickle.loads() fails with: TypeError: buffer() takes …
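
One commonly suggested workaround (an assumption on my part, not the accepted answer): coerce the buffer/memoryview the database driver returns into plain bytes before pickling. This reuses MyModel and tmp_obj from the question above:

    import pickle

    obj = MyModel.objects.get(pk=tmp_obj.pk)

    # buffer/memoryview objects are not picklable; convert the column value
    # to plain bytes before serializing the model instance.
    obj.bin_data = bytes(obj.bin_data)

    data = pickle.dumps(obj, protocol=pickle.HIGHEST_PROTOCOL)
    obj2 = pickle.loads(data)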

Can I store a python dictionary in google's BigTable datastore without serializing it explicitly?

喜夏-厌秋 submitted on 2019-12-04 14:41:01
Question: I have a Python dictionary that I would like to store in Google's BigTable datastore (it is an attribute in a db.Model class). Is there an easy way to do this, e.g. using a db.DictionaryProperty? Or do I have to use pickle to serialize my dictionary? My dictionary is relatively straightforward: it has strings as keys, but it may also contain sub-dictionaries for some keys. For example: { 'myKey' : 100, 'another' : 'aha', 'a sub dictionary' : { 'a': 1, 'b': 2 } } PS: I would like to …
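
A sketch of one well-known pattern for the old google.appengine.ext.db API (an assumption, since the question is truncated; there is no built-in db.DictionaryProperty): a custom property that pickles the dict into a db.Blob on write and unpickles it on read. DictProperty, MyEntity, and settings are hypothetical names:

    import pickle
    from google.appengine.ext import db

    class DictProperty(db.Property):
        # Custom property that stores a dict as a pickled blob.
        data_type = dict

        def get_value_for_datastore(self, model_instance):
            value = super(DictProperty, self).get_value_for_datastore(model_instance)
            return db.Blob(pickle.dumps(value))

        def make_value_from_datastore(self, value):
            if value is None:
                return {}
            return pickle.loads(value)

    class MyEntity(db.Model):
        settings = DictProperty()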