shelve

How to use dill library for object serialization with shelve library

Submitted by 烂漫一生 on 2021-02-16 21:26:09
Question: I'm using the PyMemoize library to cache a coroutine. I decorated the coroutine, but when Python calls it, I get: TypeError: can't pickle coroutine objects. This happens because PyMemoize internally tries to pickle the coroutine and store it in Redis. For this it uses shelve.Shelf, which in turn uses pickle. The problem is that, for some reason, pickle doesn't support pickling coroutines. I've tried pickling coroutines with dill and it worked. How do I tell shelve to use dill as serialization
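The excerpt above is cut off, but the usual trick is to subclass shelve.Shelf and override the two methods that invoke pickle. A minimal sketch, assuming the third-party dill package is installed; the DillShelf name and the plain-dict backend are illustrative only (a real shelf would wrap an opened dbm):

```python
import shelve
import dill  # third-party: pip install dill

class DillShelf(shelve.Shelf):
    """Shelf variant that serializes with dill, which handles objects
    (lambdas, coroutines, ...) that stdlib pickle rejects."""
    def __getitem__(self, key):
        return dill.loads(self.dict[key.encode(self.keyencoding)])
    def __setitem__(self, key, value):
        self.dict[key.encode(self.keyencoding)] = dill.dumps(value, self._protocol)

shelf = DillShelf({})            # plain dict backend, just for illustration
shelf["fn"] = lambda x: x + 1    # stdlib pickle raises PicklingError on lambdas
print(shelf["fn"](41))           # → 42
```

Note this skips the writeback cache that the stock Shelf methods maintain, which is fine for writeback=False (the default).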

Shelve writes blank lists/dictionaries

Submitted by 為{幸葍}努か on 2021-01-29 03:24:02
Question: I'm trying to make an API to create custom maps in a game I made, but shelve just writes blank lists and dictionaries. I try to store the data like this: "name" for the displayed name of the map, "name_1" for the name that is treated as a variable, "textures" for the file path of the texture, and "wind" for the amount of wind that's blowing. Here's my code: import tkinter.filedialog as tfd import shelve import shutil as os file1 = shelve.open("info") if "written" in file1: pass else: file1["name"] = [] file1["name
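The blank lists this question describes are the classic shelve pitfall: mutating a stored object in place changes a temporary unpickled copy, not the shelf. A small sketch of the failure and the writeback fix (the path and key names below are illustrative, not taken from the asker's code):

```python
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "info")   # temp path for the demo

# Without writeback, an in-place mutation of a stored list is silently lost:
with shelve.open(path) as file1:
    file1["name"] = []
    file1["name"].append("map_1")   # appends to a fresh unpickled copy only
with shelve.open(path) as file1:
    print(file1["name"])            # → []

# Fix 1: open with writeback=True so cached entries are written back on close.
with shelve.open(path, writeback=True) as file1:
    file1["name"].append("map_1")
# Fix 2 (no writeback): read, mutate, reassign: tmp = d[k]; tmp.append(x); d[k] = tmp
with shelve.open(path) as file1:
    print(file1["name"])            # → ['map_1']
```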

Reduce Python shelve size

Submitted by 寵の児 on 2021-01-27 11:42:14
Question: I'm using the shelve module with gdbm to store Python objects. I understand that shelve uses pickle to store the objects. Unfortunately, the sizes of my shelves are too large. I found this solution for bzip2- or gzip-compressing individual pickles. My question is: is there a way to make shelve compress all of its pickles? Source: https://stackoverflow.com/questions/53990501/reduce-python-shelve-size
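One stdlib-only way to do what the question asks: subclass shelve.Shelf so every value is compressed before it reaches the dbm file. A sketch using zlib in place of the bzip2/gzip the question mentions; the CompressedShelf name and the plain-dict backend are inventions for illustration:

```python
import pickle
import shelve
import zlib

class CompressedShelf(shelve.Shelf):
    """Shelf variant that zlib-compresses every pickle on write
    and decompresses on read."""
    def __getitem__(self, key):
        raw = self.dict[key.encode(self.keyencoding)]
        return pickle.loads(zlib.decompress(raw))
    def __setitem__(self, key, value):
        data = pickle.dumps(value, self._protocol)
        self.dict[key.encode(self.keyencoding)] = zlib.compress(data, 9)

shelf = CompressedShelf({})   # dict backend for the demo
shelf["rows"] = [{"id": i, "name": "row"} for i in range(1000)]
stored = shelf.dict[b"rows"]
print(len(stored), "<", len(pickle.dumps(shelf["rows"])))
```

For a real shelf you would pass an opened dbm database (e.g. from dbm.open) instead of the plain dict used here.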

Read complexity for Python shelve

Submitted by 空扰寡人 on 2020-04-18 06:39:04
Question: I have been using Python's shelve module for some time. It almost feels like a very good solution for an on-disk version of a Python dictionary. My question is: if it's serialized on disk and opening it doesn't load the whole shelved object into memory, what is the read complexity associated with it? Let's say I have an object containing a few million key/value pairs stored in a shelve db. I usually do: import shelve key = "1234" d = shelve.open("path/to/shelve/db") output = d[key] d.close()
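As a rough illustration of the access pattern the question asks about: the dbm backends locate a record by hashing the key, so a single lookup reads and unpickles one value rather than the whole shelf. A small sketch (the path and shelf size are arbitrary choices for the demo):

```python
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo_shelf")   # illustrative path

# Populate a shelf with many records.
with shelve.open(path) as d:
    for i in range(1000):
        d[str(i)] = {"value": i}

# A single lookup hashes the key and reads that one record from the
# dbm file; the other 999 entries are never deserialized.
with shelve.open(path) as d:
    record = d["500"]
print(record)   # → {'value': 500}
```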

Python 2.7.2 shelve fails on OSX

Submitted by 北战南征 on 2020-01-03 00:52:10
Question: Using the shelve standard lib on Python 2.7.2, I wrote an extremely simple test to create a persistent data file and then immediately open it for printing: import os import shelve shelf_filename = str(__file__.split('.')[0] + '.dat') #delete the shelf file if it exists already. try: os.remove(shelf_filename) print "DELETED LEFTOVER SHELF FILE", shelf_filename except OSError: pass #create a new shelf, write some data, and flush it to disk shelf_handle = shelve.open(shelf_filename) print
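A hedged sketch of what commonly bites in scripts like the one quoted above: shelve.open lets the platform's dbm backend choose the real on-disk filename(s), so an os.remove on the bare name can miss the file the backend actually created. Globbing on the base name is more robust (the path below is illustrative, in Python 3 syntax rather than the asker's Python 2):

```python
import glob
import os
import shelve
import tempfile

base = os.path.join(tempfile.mkdtemp(), "test_shelf")

with shelve.open(base) as shelf:   # close() flushes everything to disk
    shelf["greeting"] = "hello"

# The dbm backend decides the real filename(s): possibly base + ".db",
# or base + ".dat"/".dir"/".bak". Delete by globbing, not os.remove(base):
for created in glob.glob(base + "*"):
    print(created)

with shelve.open(base) as shelf:
    print(shelf["greeting"])       # → hello
```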

Persistent multiprocess shared cache in Python with stdlib or minimal dependencies

Submitted by 拥有回忆 on 2020-01-01 09:59:10
Question: I just tried Python's shelve module as a persistent cache for data fetched from an external service. The complete example is here. I was wondering what the best approach would be to make this multiprocess-safe. I am aware of Redis, memcached and such "real solutions", but I'd like to use only parts of the Python standard library, or very minimal dependencies, to keep my code compact and not introduce unnecessary complexity when running the code in a single-process, single-thread model.
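One stdlib-only answer to the question above, on POSIX systems, is to serialize all shelf access behind an fcntl advisory lock. A minimal sketch; the LockedShelf name and the sidecar .lock file are inventions for illustration, and this gives mutual exclusion rather than concurrent access:

```python
import fcntl      # POSIX-only; Windows would need msvcrt.locking instead
import os
import shelve
import tempfile

class LockedShelf:
    """Context manager that holds an exclusive advisory lock on a
    sidecar file for as long as the shelf is open."""
    def __init__(self, path):
        self.path = path
        self.lock_path = path + ".lock"
    def __enter__(self):
        self.lock_file = open(self.lock_path, "w")
        fcntl.flock(self.lock_file, fcntl.LOCK_EX)   # blocks other processes
        self.shelf = shelve.open(self.path)
        return self.shelf
    def __exit__(self, *exc):
        self.shelf.close()                            # flush before unlocking
        fcntl.flock(self.lock_file, fcntl.LOCK_UN)
        self.lock_file.close()

path = os.path.join(tempfile.mkdtemp(), "cache")      # illustrative path
with LockedShelf(path) as cache:
    cache["key"] = "value"
with LockedShelf(path) as cache:
    print(cache["key"])   # → value
```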

Using python shelve cross-platform

Submitted by 送分小仙女□ on 2020-01-01 02:29:13
Question: I am hoping for a little advice on shelves/databases in Python. Problem: I have a database created on a Mac that I want to use on Windows 7. I use Python 3.2, Mac OS X 10.7, and Windows 7. When I open and save my shelve on the Mac, all is well, and I get a file with a ".db" extension. On my Windows Python it is not recognized. I can, however, create a new db on the PC and get files with ".bak", ".dat", and ".dir" extensions. I am guessing that the Python on the PC does not have the same underlying
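The symptom above matches the fact that the on-disk container depends on whichever dbm backend each platform's Python was built with, so the files themselves are not portable. A common workaround is to move the data as one plain pickle and rebuild the shelf on the target machine. A sketch (function names and paths are illustrative):

```python
import os
import pickle
import shelve
import tempfile

def export_shelf(shelf_path, out_path):
    """Dump an entire shelf into one portable pickle file,
    sidestepping the platform-specific dbm container format."""
    with shelve.open(shelf_path) as shelf, open(out_path, "wb") as f:
        pickle.dump(dict(shelf), f)

def import_shelf(in_path, shelf_path):
    """Rebuild a shelf from the portable pickle on the target machine."""
    with open(in_path, "rb") as f, shelve.open(shelf_path) as shelf:
        shelf.update(pickle.load(f))

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "mac_shelf")      # stands in for the Mac-created db
with shelve.open(src) as s:
    s["scores"] = [1, 2, 3]
export_shelf(src, os.path.join(tmp, "portable.pkl"))
import_shelf(os.path.join(tmp, "portable.pkl"), os.path.join(tmp, "win_shelf"))
with shelve.open(os.path.join(tmp, "win_shelf")) as s:
    print(s["scores"])   # → [1, 2, 3]
```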