pickle

_pickle.UnpicklingError: could not find MARK

大憨熊 submitted on 2019-12-10 02:21:06
Question: I got exceptions like UnicodeDecodeError raised when pickling (a list of) EventFrame objects whose participants member was an empty set. class EventFrame: """Frame for an event""" def __init__(self, id=0): ... self.participants = set() ... When it wasn't empty there were no problems, so I first set participants to something and then pickled it. But at runtime it may happen that participants is emptied again, so I tried to manually delete the attribute in that case. After that I…
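For what it's worth, an empty set inside a pickled object round-trips without error in Python 3; a "could not find MARK" UnpicklingError usually points at a truncated or corrupted stream, often from opening the pickle file in text mode instead of binary. A minimal sketch, reconstructing the EventFrame class from the excerpt:

```python
import pickle

class EventFrame:
    """Frame for an event (minimal reconstruction of the class in the question)."""
    def __init__(self, id=0):
        self.id = id
        self.participants = set()   # empty set, as in the question

# Round-trip a list of frames, one of which has an empty participants set.
frames = [EventFrame(id=1)]
blob = pickle.dumps(frames)
restored = pickle.loads(blob)
print(restored[0].participants)     # set()
```

If the data goes through a file, it must be opened with 'wb'/'rb'; writing pickle bytes through a text-mode file handle is a common source of corrupted streams like the one in the traceback.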

Python pickle to xml

别来无恙 submitted on 2019-12-09 23:36:11
Question: How can I convert a pickle object to an XML document? For example, I have a pickle like this: cpyplusplus_test Coordinate p0 (I23 I-11 tp1 Rp2 . I want to get something like: <Coordinate> <x>23</x> <y>-11</y> </Coordinate> The Coordinate class has x and y attributes, of course. I can supply an XML schema for the conversion. I tried the gnosis.xml module. It can objectify XML documents into Python objects, but it cannot serialize objects to XML documents like the above. Any suggestions? Thanks. Answer 1: gnosis…
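One route that needs no third-party library: unpickle the object, then walk its attributes with the standard-library xml.etree.ElementTree. A sketch, using a hypothetical stand-in for the question's Coordinate class:

```python
import pickle
import xml.etree.ElementTree as ET

class Coordinate:
    """Hypothetical stand-in for the pyplusplus_test Coordinate class."""
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

def to_xml(obj):
    """Serialize a simple object's instance attributes to an XML string."""
    root = ET.Element(type(obj).__name__)
    for name, value in vars(obj).items():
        child = ET.SubElement(root, name)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

# Round-trip through pickle as in the question, then emit XML.
p = pickle.loads(pickle.dumps(Coordinate(23, -11)))
print(to_xml(p))   # <Coordinate><x>23</x><y>-11</y></Coordinate>
```

This only handles flat objects with scalar attributes; nested objects would need a recursive version, and validating against a supplied schema would be a separate step.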

Caching Matplotlib with Memcache (Won't Pickle)

☆樱花仙子☆ submitted on 2019-12-09 18:36:21
Question: I have a chart that takes 3 seconds to render, and subcharts that can be made from that chart by adding things to it. I want to cache the axes from the main chart so that I can retrieve and modify them later when rendering the subcharts. How can I get past this error? Here's some sample test code: import pylibmc cache = pylibmc.Client(["127.0.0.1"], binary=True, behaviors={"tcp_nodelay": True, "ketama": True}) import matplotlib.pyplot as plt cache_name = 'test' fig = plt.figure…
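Matplotlib figures have supported pickling since matplotlib 1.2, so one workaround consistent with this setup is to store the pickled bytes in the cache rather than handing the figure object to the client directly. A sketch, with a plain dict standing in for the pylibmc client (pylibmc's set/get take the same key/value shape):

```python
import pickle
import matplotlib
matplotlib.use("Agg")            # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Stand-in for the pylibmc client: with memcache you would call
# cache.set("test", blob) and cache.get("test") instead.
cache = {}

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 5, 6])    # the expensive "main chart"

# Store serialized bytes, not the live figure object.
cache["test"] = pickle.dumps(fig)

# Later: restore the figure and keep modifying its axes for a subchart.
restored_fig = pickle.loads(cache["test"])
restored_ax = restored_fig.axes[0]
restored_ax.set_title("subchart")
```

Note that memcache value-size limits (1 MB by default) can still reject a large figure, in which case compressing the bytes or caching the underlying data instead may be necessary.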

Lightweight pickle for basic types in python?

浪尽此生 submitted on 2019-12-09 17:33:23
Question: All I want to do is serialize and unserialize tuples of strings or ints. I looked at pickle.dumps(), but the byte overhead is significant; it looks like it takes up about 4x as much space as it needs to. Besides, all I need is basic types; I have no need to serialize objects. marshal is a little better in terms of space, but the result is full of nasty \x00 bytes. Ideally I would like the result to be human-readable. I thought of just using repr() and eval(), but is there a simple…
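For tuples of strings and ints, the standard-library json module gives a compact, human-readable encoding without the eval() safety problem; the one wrinkle is that JSON has no tuple type, so tuples come back as lists. A minimal sketch:

```python
import json

data = ("alpha", "beta", 42)

# json round-trip: readable text out, list back in; re-tuple on the way out.
encoded = json.dumps(data)
decoded = tuple(json.loads(encoded))

print(encoded)            # ["alpha", "beta", 42]
print(decoded == data)    # True
```

json.loads is a safe replacement for eval() here: it only ever constructs basic types, so untrusted input cannot execute code.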

Python, writing an integer to a '.txt' file

北慕城南 submitted on 2019-12-09 16:19:46
Question: Would using the pickle module be the fastest and most robust way to write an integer to a text file? Here is the syntax I have so far: import pickle pickle.dump(obj, file) If there is a more robust alternative, please feel free to tell me. My use case is writing user input: n = int(input("Enter a number: ")) Yes, a human will need to read it and maybe edit it; there will be 10 numbers in the file; and Python may need to read it back later. Answer 1: I think it's simpler doing: number = 1337 with open…
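Given the stated requirements (human-readable, human-editable, read back by Python), plain text with one number per line is simpler than pickle, which writes a binary format a human cannot edit. A sketch of the plain-text approach the answer is heading toward:

```python
# One integer per line: trivially readable and editable by a human,
# trivially parsed back by Python.
numbers = [1337, 42, 7]

with open("numbers.txt", "w") as f:
    for n in numbers:
        f.write(f"{n}\n")

with open("numbers.txt") as f:
    read_back = [int(line) for line in f]

print(read_back)   # [1337, 42, 7]
```

int(line) tolerates the trailing newline, so no explicit strip is needed.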

How to pickle Keras model?

徘徊边缘 submitted on 2019-12-09 15:32:41
Question: The official documentation states that "It is not recommended to use pickle or cPickle to save a Keras model." However, my need to pickle a Keras model stems from hyperparameter optimization using sklearn's RandomizedSearchCV (or any other hyperparameter optimizer). It's essential to save the results to a file, since the script can then be executed remotely in a detached session, etc. Essentially, I want to: trial_search = RandomizedSearchCV( estimator=keras_model, ... ) pickle.dump( trial_search,…
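A commonly cited workaround is to give the model pickle support via __getstate__/__setstate__ that delegate to the library's own save/load routines; for Keras that would mean calling model.save() to a temporary file in __getstate__ and keras.models.load_model() in __setstate__, after which pickle.dump(trial_search, ...) works because pickle now routes through the native format. The mechanism itself, sketched on a hypothetical stand-in class so it runs without TensorFlow installed:

```python
import pickle

class NativeSaver:
    """Stand-in for a Keras model: it has its own native save/load format.
    With Keras, native_save would be model.save(tmp_path) and native_load
    would be keras.models.load_model(tmp_path)."""

    def __init__(self, weights=None):
        self.weights = weights if weights is not None else [0.0]

    def native_save(self):
        return {"weights": self.weights}     # Keras analogue: write HDF5/SavedModel

    def native_load(self, blob):
        self.weights = blob["weights"]

    # Route pickling through the native format.
    def __getstate__(self):
        return self.native_save()

    def __setstate__(self, blob):
        self.native_load(blob)

model = NativeSaver([1.0, 2.0, 3.0])
restored = pickle.loads(pickle.dumps(model))
print(restored.weights)   # [1.0, 2.0, 3.0]
```

With this hook in place, the whole RandomizedSearchCV object pickles, since the only unpicklable part was the model it wraps; alternatively, one can pickle just cv_results_ and best_params_ and save the best model with model.save() separately.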

Append list of Python dictionaries to a file without loading it

梦想的初衷 submitted on 2019-12-09 05:32:58
Question: Suppose I need to have a database file consisting of a list of dictionaries: file: [ {"name":"Joe","data":[1,2,3,4,5]}, { ... }, ... ] I need a function that receives a list of dictionaries as shown above and appends it to the file. Is there any way to achieve that, say using json (or any other method), without loading the file? EDIT 1: Note: what I need is to append new dictionaries to a file that already exists on disk. Answer 1: You can use json to dump the dicts, one per line. Now…
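The one-dict-per-line approach the answer describes is the JSON Lines convention: because each record is a self-contained line, appending never requires reading what is already there. A minimal sketch (file name is illustrative):

```python
import json

def append_records(path, dicts):
    """Append dictionaries to a file, one JSON object per line (JSON Lines),
    without reading the existing contents."""
    with open(path, "a") as f:
        for d in dicts:
            f.write(json.dumps(d) + "\n")

def read_records(path):
    """Read all records back as a list of dicts."""
    with open(path) as f:
        return [json.loads(line) for line in f]

open("db.jsonl", "w").close()    # start from an empty file for the demo
append_records("db.jsonl", [{"name": "Joe", "data": [1, 2, 3, 4, 5]}])
append_records("db.jsonl", [{"name": "Ann", "data": [6, 7]}])
print(read_records("db.jsonl"))
```

The trade-off versus the bracketed-list layout in the question is that the file is no longer a single JSON document, but any consumer that reads line by line gets the same data.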

What is the recommended way to persist (pickle) custom sklearn pipelines?

蓝咒 submitted on 2019-12-09 03:09:05
Question: I have built an sklearn pipeline that combines a standard support vector regression component with some custom transformers that create features. The pipeline is put into an object which is trained and then pickled (this seems to be the recommended way); the unpickled object is used to make predictions. For distribution, this is turned into an executable file with pyinstaller. When I call the unpickled regression object from a unit test, it works fine. However, when I attempt to use the…
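A plausible culprit, consistent with "works in a unit test, fails in the pyinstaller bundle": pickle stores custom classes by module path, not by code, so the custom transformers' module must be importable at unpickle time inside the frozen executable (which may require listing it in pyinstaller's hiddenimports). A sketch demonstrating the module-path dependency, using a hypothetical module name:

```python
import pickle
import sys
import types

# Simulate a transformer defined in an importable module named
# "feature_transformers" (hypothetical name), rather than in __main__.
mod = types.ModuleType("feature_transformers")
exec(
    "class FeatureAdder:\n"
    "    def transform(self, X):\n"
    "        return [x + 1 for x in X]\n",
    mod.__dict__,
)
sys.modules["feature_transformers"] = mod   # must be importable when unpickling

blob = pickle.dumps(mod.FeatureAdder())

# The stream records the module path, not the class's code:
print(b"feature_transformers" in blob)      # True
restored = pickle.loads(blob)
print(restored.transform([1, 2, 3]))        # [2, 3, 4]
```

If `sys.modules` lookup (i.e. the import) fails in the deployed environment, unpickling raises ModuleNotFoundError / AttributeError even though the same pickle loads fine during development.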

Saving an sklearn `FunctionTransformer` with the function it wraps

柔情痞子 submitted on 2019-12-08 19:13:46
Question: I am using sklearn's Pipeline and FunctionTransformer with a custom function: from sklearn.externals import joblib from sklearn.preprocessing import FunctionTransformer from sklearn.pipeline import Pipeline This is my code: def f(x): return x*2 pipe = Pipeline([("times_2", FunctionTransformer(f))]) joblib.dump(pipe, "pipe.joblib") del pipe del f pipe = joblib.load("pipe.joblib") # Causes an exception And I get this error: AttributeError: module '__main__' has no attribute 'f' How can this be…

Unpickling data pickled in Python 2.5, in Python 3.1, then uncompressing with zlib

一笑奈何 submitted on 2019-12-08 16:05:22
Question: In Python 2.5 I stored data using this code: def GLWriter(file_name, string): import cPickle import zlib data = zlib.compress(str(string)) file = open(file_name, 'w') cPickle.dump(data, file) It worked fine, and I was able to read the data back by doing that process in reverse. It didn't need to be secure, just something that wasn't readable to the human eye. If I put "test" into it and then opened the file it created, it looked like this: S'x\x9c+I-.\x01\x00\x04]\x01\xc1' p1 . For various reasons we…
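In Python 3 the usual recipe for this situation is to open the old file in binary mode and pass encoding="bytes" to pickle.load, so the Python 2 str comes back as bytes that zlib can decompress rather than being decoded as text (which is what triggers UnicodeDecodeError). A sketch that reconstructs a Python-2-style protocol-0 stream in memory to show the flag at work:

```python
import pickle
import zlib

# Reconstruct the kind of stream Python 2's cPickle (protocol 0, text mode)
# wrote for zlib-compressed data -- the S'...' form shown in the question.
compressed = zlib.compress(b"test")
py2_stream = b"S" + repr(compressed)[1:].encode("ascii") + b"\n."

# encoding="bytes" makes the old Python 2 str load as raw bytes.
data = pickle.loads(py2_stream, encoding="bytes")
print(zlib.decompress(data))   # b'test'
```

With the real file, the same idea reads: `with open(file_name, "rb") as f: data = pickle.load(f, encoding="bytes")`, followed by `zlib.decompress(data)`.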