File indexing (using Binary trees?) in Python


Why reinvent the wheel? By all means, index the files, but use Whoosh, or Lucene, etc.

Edit: you hadn't stated your performance requirements at the time I posted this answer. You're not going to be able to index "millions of rows per hour" with off-the-shelf software.
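For the simpler case where an off-the-shelf indexer is enough, here is a minimal sketch with Whoosh (this assumes `pip install whoosh` and a hypothetical docs/ directory of UTF-8 text files; the directory and field names are placeholders):

    # Sketch: build a Whoosh index over a directory of text files and query it.
    # Assumes a hypothetical "docs/" directory; adjust paths and fields to taste.
    import os
    from whoosh.index import create_in
    from whoosh.fields import Schema, ID, TEXT
    from whoosh.qparser import QueryParser

    schema = Schema(path=ID(stored=True, unique=True), content=TEXT)
    os.makedirs("indexdir", exist_ok=True)
    ix = create_in("indexdir", schema)

    writer = ix.writer()
    for name in os.listdir("docs"):
        full = os.path.join("docs", name)
        with open(full, encoding="utf-8") as f:
            writer.add_document(path=full, content=f.read())
    writer.commit()

    # Query the index and print matching file paths.
    with ix.searcher() as searcher:
        query = QueryParser("content", ix.schema).parse("keyword")
        for hit in searcher.search(query):
            print(hit["path"])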

The physical storage access time will tend to dominate anything you do. When you profile, you'll find that the read() is where you spend most of your time.

To reduce the time spent waiting for I/O, your best bet is compression.

Create one large ZIP archive of all of your files: a single open() and fewer reads. You'll spend more CPU time decompressing, but since I/O time dominates your processing, reducing it by zipping everything is a net win.
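A rough sketch of that pattern with the standard-library zipfile module, assuming a hypothetical data.zip built from your files ahead of time:

    # Sketch: read many small files through one ZIP archive.
    # "data.zip" is a placeholder; build it from your files beforehand.
    import zipfile

    with zipfile.ZipFile("data.zip") as archive:   # one open() for the whole set
        for name in archive.namelist():
            if name.endswith("/"):                 # skip directory entries
                continue
            with archive.open(name) as member:
                data = member.read()               # decompression costs CPU, not seeks
                # ... parse/index `data` here ...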

If the data is already organized in fields, it doesn't sound like a text searching/indexing problem. It sounds like tabular data that would be well-served by a database.

Script the file data into a database, index as you see fit, and query the data in any complex way the database supports.
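As one possible shape for that script, here is a sketch using sqlite3 from the standard library; the tab-separated key/value layout and the data/*.txt glob are placeholders for whatever fields your files actually contain:

    # Sketch: load field-structured files into a database and index one column.
    # Assumes each line holds "key<TAB>value"; adapt the schema to your fields.
    import glob
    import sqlite3

    conn = sqlite3.connect("records.db")
    conn.execute("CREATE TABLE IF NOT EXISTS records (key TEXT, value TEXT)")

    for path in glob.glob("data/*.txt"):
        with open(path, encoding="utf-8") as f:
            rows = (line.rstrip("\n").split("\t", 1)
                    for line in f if "\t" in line)
            conn.executemany("INSERT INTO records VALUES (?, ?)", rows)

    conn.execute("CREATE INDEX IF NOT EXISTS idx_key ON records (key)")
    conn.commit()

    # Any query the database supports, now backed by the index:
    for row in conn.execute("SELECT value FROM records WHERE key = ?", ("some-key",)):
        print(row[0])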

That is, unless you're looking for a cool learning project. In that case, by all means, come up with an interesting file indexing scheme.

There are a number of file-based DBMSes designed for the sort of data you describe. One of the most widely used is Berkeley DB. It's open source, now owned by Oracle, and comes in C and Java flavors (with Python and other language wrappers for the C version).

Since your data is read-only, though, there are other options. One that comes to mind is CDB. It likewise has Python interfaces, as well as a pure Python reimplementation.
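To illustrate the dict-like, file-backed key/value access these stores give you, here is a sketch using the standard-library dbm module as a stand-in; the actual Berkeley DB and CDB bindings offer similar interfaces but under their own module names and APIs:

    # Sketch of the key/value pattern offered by file-based stores such as
    # Berkeley DB or CDB, using stdlib dbm as a stand-in for illustration.
    import dbm

    # Build the store once (write phase).
    with dbm.open("records.dbm", "c") as db:
        db[b"alpha"] = b"first record"
        db[b"beta"] = b"second record"

    # Read-only lookups afterwards.
    with dbm.open("records.dbm", "r") as db:
        print(db[b"alpha"])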

Or, if you really like implementing things yourself, you could always implement SSTables from Google's Bigtable paper. ;)

sqlite3 is fast, small, part of Python's standard library (so there is nothing to install), and provides indexing of columns. It writes to plain files, so you wouldn't need to install a separate database server.
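A minimal sketch of that (the table and column names here are made up for illustration):

    # Sketch: a file-backed sqlite3 database with an indexed column.
    import sqlite3

    conn = sqlite3.connect("index.db")          # just a file on disk, no server
    conn.execute("CREATE TABLE IF NOT EXISTS files (path TEXT, size INTEGER)")
    conn.execute("CREATE INDEX IF NOT EXISTS idx_path ON files (path)")
    conn.execute("INSERT INTO files VALUES (?, ?)", ("docs/a.txt", 1024))
    conn.commit()

    # EXPLAIN QUERY PLAN shows the lookup using the index rather than a full scan.
    print(conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM files WHERE path = ?", ("docs/a.txt",)
    ).fetchall())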
