pickle

Python - appending to a pickled list

Submitted by 主宰稳场 on 2020-01-01 08:20:48
Question: I'm struggling to append to a list in a pickled file. This is the code:

    # saving high scores to a pickled file
    import pickle

    first_name = input("Please enter your name:")
    score = input("Please enter your score:")

    scores = []
    high_scores = first_name, score
    scores.append(high_scores)

    file = open("high_scores.dat", "ab")
    pickle.dump(scores, file)
    file.close()

    file = open("high_scores.dat", "rb")
    scores = pickle.load(file)
    print(scores)
    file.close()

The first time I run the code, it prints the name
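Opening the file with "ab" appends a new, independent pickle on every run, while a single pickle.load reads back only the first one. A minimal sketch of one common fix (not from the original post): load the existing list if the file is present, append the new entry, and rewrite the whole file with "wb".

    import os
    import pickle

    first_name = input("Please enter your name:")
    score = input("Please enter your score:")

    scores = []
    if os.path.exists("high_scores.dat"):
        # Read back the single list written on a previous run.
        with open("high_scores.dat", "rb") as f:
            scores = pickle.load(f)

    scores.append((first_name, score))

    # "wb" truncates, so the file always holds exactly one pickled list.
    with open("high_scores.dat", "wb") as f:
        pickle.dump(scores, f)

    print(scores)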

Python — read_pickle ImportError: No module named indexes.base

Submitted by 白昼怎懂夜的黑 on 2020-01-01 07:29:08
Question: I write a numeric dataframe to a .pkl file on one machine (df.to_pickle()); for some reason, I have to open this file on a different machine (pd.read_pickle()), and I get an ImportError saying: No module named indexes.base. When I try to import indexes, there doesn't seem to be one. When I tried to_csv on one machine and read_csv on the other, it worked. Many thanks!

    ImportError                               Traceback (most recent call last)
    <ipython-input-199-2be4778e3b0a> in <module>()
    ----> 1 pd.read_pickle("test
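This ImportError usually signals a pandas version mismatch: the index classes lived under pandas.indexes before pandas 0.20 and moved to pandas.core.indexes afterwards, so a pickle written by an older version references a module path the newer one no longer has. A commonly cited workaround is to alias the old path before unpickling; this is a sketch, and the file name is a placeholder. Depending on the versions involved, you may need to alias submodules such as pandas.indexes.base as well.

    import sys
    import pandas as pd

    # Map the pre-0.20 module path onto its modern location so the
    # unpickler can resolve classes recorded under "pandas.indexes".
    sys.modules["pandas.indexes"] = pd.core.indexes

    df = pd.read_pickle("test.pkl")  # placeholder file name

Re-saving the dataframe on the original machine with matching pandas versions, or exporting via to_csv as the asker found, avoids the problem entirely.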

Keras “pickle_safe”: What does it mean to be “pickle safe”, or alternatively, “non picklable” in Python?

Submitted by 旧巷老猫 on 2020-01-01 05:02:29
Question: Keras fit_generator() has a parameter pickle_safe which defaults to False. Can training run faster if the generator is pickle-safe, and should I accordingly set the flag to True? According to Keras's docs:

    pickle_safe: If True, use process-based threading. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator, as they can't be passed easily to child processes.

I don't understand exactly what this is saying. How can I determine if my
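The practical test for "picklable" is simply whether pickle can serialise the object. A small sketch of a helper (not part of Keras) that tries exactly that:

    import pickle
    import threading

    def is_picklable(obj):
        """Return True if pickle can serialise obj."""
        try:
            pickle.dumps(obj)
            return True
        except Exception:
            # Failures surface as PicklingError, TypeError, or
            # AttributeError depending on the object.
            return False

    print(is_picklable([1, 2, 3]))         # True: plain data pickles fine
    print(is_picklable(lambda x: x + 1))   # False: lambdas cannot be pickled
    print(is_picklable(threading.Lock()))  # False: locks cannot be pickled

If everything the generator closes over passes this test, process-based workers are an option; whether they are actually faster depends mainly on whether the generator does CPU-bound work that threads cannot parallelise under the GIL.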

Load data from Python pickle file in a loop?

Submitted by 拜拜、爱过 on 2020-01-01 03:33:39
Question: In a small data-acquisition project we use Python's pickle to store recorded data: for each "event" we add it to the output file f with pkl.dump(event, f, pkl.HIGHEST_PROTOCOL), where import cPickle as pkl. In the analysis of the data we read each event, but in contrast to a normal file, where processing can be done in a rather elegant way:

    with open(filename) as f:
        for line in f:
            do_something()

looping over all the data in a pickle file becomes a bit more awkward:

    with open
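One widely used pattern (a sketch, not code from the original post) wraps the repeated pickle.load calls in a generator, so that iterating over events looks just like iterating over lines:

    import pickle

    def read_events(filename):
        """Yield each object that was pickled consecutively into filename."""
        with open(filename, "rb") as f:
            while True:
                try:
                    yield pickle.load(f)
                except EOFError:
                    # pickle.load raises EOFError once the stream is exhausted.
                    return

    # Usage mirrors the text-file idiom from the question:
    # for event in read_events("data.pkl"):
    #     do_something(event)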

Preserving numpy view when pickling

Submitted by 不打扰是莪最后的温柔 on 2020-01-01 03:06:07
Question: By default, pickling a numpy view array loses the view relationship, even if the array's base is pickled too. My situation is that I have some complex container objects which are pickled, and in some cases some of the contained data are views into others. Saving an independent array for each view is not only a waste of space; the reloaded data have also lost the view relationship. A simple example would be (though in my case the containers are more complex than a dictionary):

    import numpy as np
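A minimal sketch of the behaviour being described: after a pickle round trip the two arrays hold the same values but no longer share memory, so writes to the base are not reflected in the former view.

    import pickle
    import numpy as np

    base = np.arange(10)
    view = base[2:5]                   # shares base's memory
    container = {"base": base, "view": view}

    restored = pickle.loads(pickle.dumps(container))

    base[2] = 99
    print(view[0])                     # 99: a live view tracks its base

    restored["base"][2] = 99
    print(restored["view"][0])         # 2: the unpickled "view" is a copy
    print(np.shares_memory(restored["base"], restored["view"]))  # False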

How to close the file after pickle.load() in python

Submitted by 霸气de小男生 on 2020-01-01 02:19:39
Question: I saved a Python dictionary in this way:

    import cPickle as pickle
    pickle.dump(dictname, open("filename.pkl", "wb"))

And I load it in another script in this way:

    dictname = pickle.load(open("filename.pkl", "rb"))

How is it possible to close the file after this?

Answer 1: It's better to use a with statement instead, which closes the file when the statement ends, even if an exception occurs:

    with open("filename.pkl", "wb") as f:
        pickle.dump(dictname, f)
    ...
    with open("filename.pkl", "rb") as f:
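For reference, a self-contained sketch of the full round trip with the with statement, using the standard-library pickle module (cPickle is the Python 2 name; on Python 3 plain pickle plays the same role):

    import pickle

    dictname = {"alice": 10, "bob": 7}  # illustrative data

    # Each with block closes the file when it exits,
    # even if dump or load raises an exception.
    with open("filename.pkl", "wb") as f:
        pickle.dump(dictname, f)

    with open("filename.pkl", "rb") as f:
        dictname = pickle.load(f)

    print(dictname)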

Fastest way to store large files in Python

Submitted by 最后都变了- on 2020-01-01 02:15:14
Question: I recently asked a question regarding how to save large Python objects to file. I had previously run into problems converting massive Python dictionaries into strings and writing them to file via write(). Now I am using pickle. Although it works, the files are incredibly large (> 5 GB). I have little experience with such large files. I wanted to know if it would be faster, or even possible, to zip this pickle file prior to storing it.

Answer 1: Python code would be extremely
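The standard library can compress the pickle stream on the fly, so no uncompressed 5 GB file is ever written. A sketch with gzip (the data and file name are placeholders); bz2.open or lzma.open are drop-in alternatives that trade speed for compression ratio:

    import gzip
    import pickle

    data = {i: str(i) * 10 for i in range(100_000)}  # stand-in for a big dict

    # Pickle straight into a gzip stream.
    with gzip.open("data.pkl.gz", "wb") as f:
        pickle.dump(data, f, protocol=pickle.HIGHEST_PROTOCOL)

    with gzip.open("data.pkl.gz", "rb") as f:
        data = pickle.load(f)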

How to put my dataset in a .pkl file in the exact format and data structure used in “mnist.pkl.gz”?

Submitted by 你说的曾经没有我的故事 on 2019-12-31 10:45:53
Question: I'm trying to use the Theano library in Python to do some experiments with Deep Belief Networks. I use the code at this address: DBN full code. This code uses the MNIST handwritten-digit database. That file is already in pickle format. It is unpickled into:

    train_set
    valid_set
    test_set

which are further unpacked as:

    train_set_x, train_set_y = train_set
    valid_set_x, valid_set_y = valid_set
    test_set_x, test_set_y = test_set

Please can someone give me the code that constructs this dataset in order to
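As used in the Theano tutorials, mnist.pkl.gz is a gzip-compressed pickle of a 3-tuple (train, valid, test), where each element is a pair of numpy arrays: a 2D float matrix of flattened images and a 1D integer vector of labels. A hedged sketch of building the same structure from your own arrays (the shapes, random data, and file name below are illustrative):

    import gzip
    import pickle

    import numpy as np

    # Illustrative stand-ins for your own data: each x is
    # (n_samples, n_features) float32 in [0, 1], each y is
    # (n_samples,) integer class labels.
    train_set = (np.random.rand(500, 784).astype("float32"),
                 np.random.randint(0, 10, 500))
    valid_set = (np.random.rand(100, 784).astype("float32"),
                 np.random.randint(0, 10, 100))
    test_set  = (np.random.rand(100, 784).astype("float32"),
                 np.random.randint(0, 10, 100))

    # Same layout as mnist.pkl.gz: a gzipped pickle of the 3-tuple.
    with gzip.open("my_dataset.pkl.gz", "wb") as f:
        pickle.dump((train_set, valid_set, test_set), f)

    with gzip.open("my_dataset.pkl.gz", "rb") as f:
        train_set, valid_set, test_set = pickle.load(f)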

What are the pitfalls of using Dill to serialise scikit-learn/statsmodels models?

Submitted by 北慕城南 on 2019-12-31 08:57:27
Question: I need to serialise scikit-learn/statsmodels models such that all the dependencies (code + data) are packaged in an artefact, and this artefact can be used to initialise the model and make predictions. Using the pickle module is not an option, because it will only take care of the data dependency (the code will not be packaged). So I have been conducting experiments with Dill. To make my question more precise, the following is an example where I build a model and persist it:

    from sklearn
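A sketch of the kind of build-persist-restore round trip the question describes, assuming dill is installed (the model choice and file name are illustrative, not from the original post):

    import dill
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    # dill serialises more than pickle (e.g. lambdas and closures), but it
    # still pickles library classes by reference, so the loading
    # environment generally needs compatible scikit-learn/dill versions
    # installed -- one of the pitfalls the question is asking about.
    with open("model.dill", "wb") as f:
        dill.dump(model, f)

    with open("model.dill", "rb") as f:
        restored = dill.load(f)

    print(restored.predict(X[:5]))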