Question
I've noticed this behavior in SQLite. When I reuse the cursor object, the working-set memory shown in Task Manager keeps increasing until my program throws an out-of-memory exception.
I refactored the code so that each time I query, I open a connection to the SQLite file, run the query, and then close the connection.
The latter approach seems much less memory-hungry: its memory use doesn't grow beyond a certain point.
All I do with my SQLite database is run a simple SELECT (containing two aggregations) against one table.
Is this behavior something we can control? I'd like to reuse my cursor object, but without having memory eaten up.
Answer 1:
See SQLite: PRAGMA cache_size
By default the page cache is fairly big (~2 MB), and it is allocated per connection. You can make it smaller with the SQL statement:
PRAGMA cache_size=-KB
A negative value sets the cache size in kibibytes; a positive value sets the number of pages to use instead.
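A minimal sketch of issuing that pragma from Python's sqlite3 module (the in-memory database and the 64 KiB figure are just illustrative):

```python
import sqlite3

# Shrink SQLite's per-connection page cache.
conn = sqlite3.connect(":memory:")

# A negative value sets the cache size in kibibytes (here ~64 KiB);
# a positive value would set the number of pages instead.
conn.execute("PRAGMA cache_size = -64")

# Read the setting back to confirm it took effect.
(size,) = conn.execute("PRAGMA cache_size").fetchone()
print(size)  # -64

conn.close()
```

The pragma applies only to the connection that issues it, so it must be run again on every new connection.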
Also, if you are using multiple connections, you might want to employ a shared cache to save memory: SQLite: Shared Cache
Answer 2:
Yes, the sqlite3 module for Python uses a statement cache.
You can read about the cached_statements parameter here.
More info on this issue.
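A minimal sketch of capping that statement cache when opening the connection (the cap of 8 is arbitrary; cached_statements defaults to 128, and lowering it trades prepared-statement reuse for memory):

```python
import sqlite3

# Limit the per-connection prepared-statement cache to 8 entries.
conn = sqlite3.connect(":memory:", cached_statements=8)
conn.execute("CREATE TABLE t (x INTEGER)")

for i in range(100):
    # Each distinct SQL string occupies a cache slot; with a small cache,
    # older prepared statements are evicted instead of accumulating.
    conn.execute(f"INSERT INTO t VALUES ({i})")
conn.commit()

(n,) = conn.execute("SELECT COUNT(*) FROM t").fetchone()
print(n)  # 100

conn.close()
```

Using parameterized queries (`INSERT INTO t VALUES (?)`) would also help here, since one SQL string is prepared once and reused instead of filling the cache with variants.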
Source: https://stackoverflow.com/questions/15069937/is-sqlite-caching-the-results-of-the-query-for-optimization