Berkeley DB File Splitting

Submitted by 此生再无相见时 on 2019-12-22 08:59:39

Question


Our application uses Berkeley DB for temporary storage and persistence. A new issue has arisen: tremendous amounts of data are coming in from various input sources, and the underlying file system does not support such large file sizes. Is there any way to split the Berkeley DB files into logical segments or partitions without losing the data inside them? I would also like to configure this through Berkeley DB properties rather than cumbersome programming for such a simple task.


Answer 1:


To my knowledge, BDB does not support this for you. You can, however, implement it yourself by creating multiple databases.

I did this before with BDB, programmatically. That is, my code partitioned a potentially large index file into separate files and created a top-level master index over those sub-files.
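The partitioning scheme described above can be sketched as a thin wrapper that hashes each key to pick one of several underlying database files. This is an illustration only: it uses Python's stdlib `dbm.dumb` as a stand-in for Berkeley DB (the same structure applies with real BDB handles, one `DB` open per segment file), and the class name `PartitionedStore` and file naming are assumptions, not anything from the original answer.

```python
import dbm.dumb
import hashlib
import os

class PartitionedStore:
    """Spread records across N segment files by hashing the key.

    Each segment is an independent key/value database file, so no
    single file has to hold the whole data set. The key hash acts
    as the 'top-level master index' deciding which segment owns a key.
    """

    def __init__(self, directory, partitions=4):
        os.makedirs(directory, exist_ok=True)
        # One database file per partition, e.g. seg00, seg01, ...
        self._segments = [
            dbm.dumb.open(os.path.join(directory, "seg%02d" % i), "c")
            for i in range(partitions)
        ]

    def _segment(self, key):
        # Stable hash -> same key always maps to the same segment file.
        h = int(hashlib.md5(key).hexdigest(), 16)
        return self._segments[h % len(self._segments)]

    def put(self, key, value):
        self._segment(key)[key] = value

    def get(self, key):
        return self._segment(key)[key]

    def close(self):
        for db in self._segments:
            db.close()
```

With real Berkeley DB the same shape works: open several `DB` handles (one per file) and route every `put`/`get` through the hash, so each individual file stays under the file system's size limit.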




Answer 2:


Modern BDB has means to add additional directories either using DB_CONFIG (recommended) or with API calls.

See if these directives (and their corresponding API calls) help: add_data_dir, set_create_dir, set_data_dir, set_lg_dir, set_tmp_dir.

Note that adding these directives is unlikely to transparently "Just Work", but it shouldn't be too hard to use db_dump/db_load to recreate the database files configured with these directives.
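For reference, the directives above go in a plain-text DB_CONFIG file in the environment's home directory. The following is only a sketch; the directory paths are placeholders, not anything from the original answer:

```
# DB_CONFIG in the environment home directory (paths are examples)
add_data_dir /mnt/vol1/data
add_data_dir /mnt/vol2/data
set_create_dir /mnt/vol2/data
set_lg_dir /mnt/vol1/logs
set_tmp_dir /tmp/bdb
```

To migrate existing files into an environment configured this way, dump each database with db_dump and reload it with db_load pointed at the new environment home, as the answer suggests.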



Source: https://stackoverflow.com/questions/7951031/berkeley-db-file-splitting
