berkeley-db

Berkeley DB File Splitting

非 Y 不嫁゛ submitted on 2019-12-05 16:46:27
Our application uses Berkeley DB for temporary storage and persistence. A new issue has arisen where tremendous amounts of data come in from various input sources. The underlying file system does not support such large file sizes. Is there any way to split the Berkeley DB files into logical segments or partitions without losing the data inside them? I also need to set this using Berkeley DB properties, not cumbersome programming, for this simple task. To my knowledge, BDB does not support this for you. You can however implement it yourself by creating multiple databases. I did this before with BDB,
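The multiple-databases approach the answer mentions amounts to partitioning keys across several small database files. A minimal sketch of that idea, using Python's stdlib `dbm.dumb` as a stand-in for the BDB handle (the shard count and file naming are hypothetical choices, not BDB settings):

```python
import dbm.dumb
import hashlib
import os
import tempfile

NUM_SHARDS = 4  # hypothetical: pick so each segment file stays under the FS size limit

def shard_index(key: bytes) -> int:
    # Stable hash so the same key always maps to the same segment file.
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % NUM_SHARDS

class ShardedStore:
    """Spread key/value pairs across several small database files."""
    def __init__(self, directory: str):
        self.shards = [
            dbm.dumb.open(os.path.join(directory, f"segment_{i}"), "c")
            for i in range(NUM_SHARDS)
        ]

    def put(self, key: bytes, value: bytes) -> None:
        self.shards[shard_index(key)][key] = value

    def get(self, key: bytes) -> bytes:
        return self.shards[shard_index(key)][key]

    def close(self) -> None:
        for db in self.shards:
            db.close()

with tempfile.TemporaryDirectory() as d:
    store = ShardedStore(d)
    store.put(b"url1", b"payload")
    print(store.get(b"url1"))  # b'payload'
    store.close()
```

The same routing logic works with real BDB handles: open N databases in one environment and hash each key to pick which database receives it.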

How can I use the Berkeley DB within an iOS application?

有些话、适合烂在心里 submitted on 2019-12-04 15:42:25
I would like to use the Berkeley DB within an iOS application, but I'm not sure how to go about this. How do you integrate Berkeley DB into an iOS project? How do you communicate with it via Objective-C? Are there any tutorials or examples out there that might demonstrate how to do this? N_A The first thing to note is that the library is C++, not Objective-C. This isn't an issue, since Objective-C can call C++. Also, there isn't much in the way of tutorials, but here is what you will need to do it yourself: Download the API. Everything you probably need to know to install is here. The specific

Berkeley DB mismatch error while configuring LDAP

此生再无相见时 submitted on 2019-12-04 14:07:08
Question: I'm configuring OpenLDAP 2.4.35 on Red Hat Linux; I have already installed Berkeley DB 4.8.30 as a prerequisite. I also checked the version compatibility in OpenLDAP's README file, which says: SLAPD: BDB and HDB backends require Oracle Berkeley DB 4.4 - 4.8, or 5.0 - 5.1. It is highly recommended to apply the patches from Oracle for a given release. Still I'm getting this error: checking db.h usability... yes checking db.h presence... yes checking for db.h... yes checking for Berkeley DB

SQLite Optimization for Millions of Entries?

北城以北 submitted on 2019-12-04 11:01:48
I'm trying to tackle a problem using a SQLite database and Perl modules. In the end, there will be tens of millions of entries I need to log. The only unique identifier for each item is a text string: the URL. I'm thinking of doing this in one of two ways: Way #1: Have a good table, a bad table, and an unsorted table. (I need to check the HTML and decide whether I want it.) Say we have 1 billion pages total, 333 million URLs in each table. When I have a new URL to add, I need to check whether it's in any of the tables, and add it to the unsorted table if it is unique. Also, I would be moving a lot of rows
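The question uses Perl, but the usual alternative to three tables is worth sketching: one table with a status column and a unique index on the URL lets SQLite do the membership check and the insert in a single statement, and classifying a page becomes an UPDATE rather than a row move. A sketch with Python's stdlib sqlite3 (table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for tens of millions of rows
conn.execute("""
    CREATE TABLE pages (
        url    TEXT PRIMARY KEY,                 -- unique index; dedup happens here
        status TEXT NOT NULL DEFAULT 'unsorted'  -- 'good' | 'bad' | 'unsorted'
    )
""")

# INSERT OR IGNORE skips URLs already present, so no separate lookup pass.
conn.execute("INSERT OR IGNORE INTO pages (url) VALUES (?)", ("http://example.com/a",))
conn.execute("INSERT OR IGNORE INTO pages (url) VALUES (?)", ("http://example.com/a",))

# Classifying a page is an UPDATE of one column, not a move between tables.
conn.execute("UPDATE pages SET status = 'good' WHERE url = ?", ("http://example.com/a",))

count, = conn.execute("SELECT COUNT(*) FROM pages").fetchone()
print(count)  # 1: the duplicate insert was ignored
```

The PRIMARY KEY index makes the uniqueness check O(log N) per insert, which scales to the billion-row range far better than scanning three tables.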

What is the proper way to access BerkeleyDB with Perl?

对着背影说爱祢 submitted on 2019-12-04 05:35:34
I've been having some problems using BerkeleyDB. I have multiple instances of the same code pointed at a single repository of DB files, and everything runs fine for 5-32 hours; then suddenly there is a deadlock. The command prompts stop right before executing a db_get, db_put, or cursor-creation call. So I'm simply asking for the proper way to handle these calls. Here's my general layout: This is how the environment and DBs are created: my $env = new BerkeleyDB::Env ( -Home => "$dbFolder\\" , -Flags => DB_CREATE | DB_INIT_CDB | DB_INIT_MPOOL) or die "cannot open environment: $BerkeleyDB:
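With DB_INIT_CDB (Concurrent Data Store), a write cursor locks out every other reader and writer until it is explicitly closed, so a cursor that escapes its close call (an early return, an exception, a crashed process) produces exactly this kind of hang. The discipline can be modeled in Python with a lock and a context manager (this is a conceptual model of CDB's single-writer rule, not the BerkeleyDB API):

```python
import threading
from contextlib import contextmanager

class CDBModel:
    """Toy model of CDB's single-writer rule: one write cursor at a time."""
    def __init__(self):
        self._write_lock = threading.Lock()
        self._data = {}

    @contextmanager
    def write_cursor(self):
        # Analogue of opening a DB_WRITECURSOR: take the exclusive lock...
        self._write_lock.acquire()
        try:
            yield self._data
        finally:
            # ...and ALWAYS release it, even if the caller raises.
            # In the Perl module this is $cursor->c_close() on every exit path.
            self._write_lock.release()

db = CDBModel()
with db.write_cursor() as data:
    data["key"] = "value"      # put under the cursor
# Lock is released here; a second writer can now proceed.
with db.write_cursor() as data:
    print(data["key"])         # value
```

The practical takeaway is the same in Perl: pair every cursor creation with a guaranteed close (and keep cursor lifetimes as short as possible) so one stalled instance cannot block the others.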

Berkeleydb - B-Tree versus Hash Table

你说的曾经没有我的故事 submitted on 2019-12-03 13:45:18
I am trying to understand what should drive the choice of access method when using BerkeleyDB: B-tree versus hash table. A hash table provides O(1) lookup, but inserts are expensive (using linear/extendible hashing we get amortized O(1) inserts). B-trees provide O(log_B N) lookup and insert times. A B-tree can also support range queries and allows access in sorted order. Apart from these considerations, what else should be factored in? If I don't need to support range queries, can I just use the hash table access method? When your data sets get very large, B-trees are still better
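The range-query difference can be seen with two stdlib stand-ins: a dict (playing the hash table) gives fast point lookups but no ordering, so a range query must scan every key, while a sorted list with `bisect` (playing the B-tree) answers "all keys in [lo, hi)" by binary-searching the two boundaries:

```python
import bisect

keys = [17, 3, 42, 8, 25]

# Hash-style access: great point lookups, but a range query is a full scan.
hash_index = {k: f"value-{k}" for k in keys}
range_scan = sorted(k for k in hash_index if 8 <= k < 30)  # touches every key

# B-tree-style access: keep keys sorted, binary-search the boundaries.
sorted_keys = sorted(keys)
lo = bisect.bisect_left(sorted_keys, 8)
hi = bisect.bisect_left(sorted_keys, 30)
range_btree = sorted_keys[lo:hi]  # only the matching slice is touched

print(range_scan)   # [8, 17, 25]
print(range_btree)  # [8, 17, 25]
```

Both return the same answer, but the sorted structure finds it in O(log N + k) where k is the number of matches, which is the property BDB's B-tree access method gives you and its hash method does not.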

Looking for a lightweight java-compatible in-memory key-value store [closed]

落花浮王杯 submitted on 2019-12-03 11:33:18
Question: [Closed as off-topic; not accepting answers.] Berkeley DB would probably be the best choice, but I can't use it due to licensing issues. Are there any alternatives? Answer 1: You can try Hazelcast. Just add hazelcast.jar to your classpath and start coding: java.util.Map map = Hazelcast.getMap("myMap"); You'll get an in-memory, distributed, dynamically scalable

Berkeley DB mismatch error while configuring LDAP

[亡魂溺海] submitted on 2019-12-03 08:43:54
I'm configuring OpenLDAP 2.4.35 on Red Hat Linux; I have already installed Berkeley DB 4.8.30 as a prerequisite. I also checked the version compatibility in OpenLDAP's README file, which says: SLAPD: BDB and HDB backends require Oracle Berkeley DB 4.4 - 4.8, or 5.0 - 5.1. It is highly recommended to apply the patches from Oracle for a given release. Still I'm getting this error: checking db.h usability... yes checking db.h presence... yes checking for db.h... yes checking for Berkeley DB major version in db.h... 4 checking for Berkeley DB minor version in db.h... 8 checking if Berkeley DB
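A "version mismatch" at this point in configure usually means the db.h found on the include path reports one version while the libdb actually linked reports another. A quick way to see what a header claims is to read its version macros, sketched here as a Python parse (the path in the comment is hypothetical; point it at whatever db.h your CPPFLAGS select):

```python
import re

# In practice, read the real header, e.g. the db.h under your BerkeleyDB install's include/.
SAMPLE_DB_H = """
#define DB_VERSION_MAJOR 4
#define DB_VERSION_MINOR 8
#define DB_VERSION_PATCH 30
"""

def header_version(text: str) -> tuple:
    """Extract (major, minor, patch) from db.h's version macros."""
    vals = dict(re.findall(r"#define\s+DB_VERSION_(MAJOR|MINOR|PATCH)\s+(\d+)", text))
    return int(vals["MAJOR"]), int(vals["MINOR"]), int(vals["PATCH"])

print(header_version(SAMPLE_DB_H))  # (4, 8, 30)
```

If the header's version differs from what `db_version()` in the linked library returns at runtime, configure refuses the combination; the fix is to point CPPFLAGS and LDFLAGS at the same Berkeley DB installation.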

Looking for a drop-in replacement for a java.util.Map

纵饮孤独 submitted on 2019-12-03 06:01:23
Question: Following up on this question, it seems that a file- or disk-based Map implementation may be the right solution to the problems I mentioned there. Short version: Right now, I have a Map implemented as a ConcurrentHashMap. Entries are added to it continually, at a fairly fixed rate. Details on this later. Eventually, no matter what, this means the JVM runs out of heap space. At work, it was (strongly) suggested that I solve this problem using SQLite, but after asking that previous
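The question is about Java, but the disk-backed-map idea itself is small enough to sketch; Python's stdlib `shelve` shows the shape of a drop-in replacement for an in-memory map whose entries live on disk instead of the heap (for Java, libraries such as MapDB offer a similar java.util.Map-compatible interface, though check their current APIs):

```python
import os
import shelve
import tempfile

with tempfile.TemporaryDirectory() as d:
    # shelve gives a dict-like object whose entries are persisted to disk,
    # so the working set no longer has to fit in memory.
    with shelve.open(os.path.join(d, "cache")) as disk_map:
        for i in range(1000):
            disk_map[str(i)] = {"payload": i}  # values are pickled to disk
        print(disk_map["42"]["payload"])  # 42
```

The trade-off is the same one the thread is circling: every get/put pays serialization and I/O costs, in exchange for removing the heap-growth failure mode.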

Building OpenLDAP from sources and missing BerkeleyDB

夙愿已清 submitted on 2019-12-03 05:42:19
Question: I'm building OpenLDAP on RHEL 5; I used the instructions found at http://www.linux.com/archive/feature/113607. All went well until running './configure' for OpenLDAP, when the following error was recorded: *<earlier output snipped>* checking for gethostbyaddr_r... yes checking number of arguments of ctime_r... 2 checking number of arguments of gethostbyname_r... 6 checking number of arguments of gethostbyaddr_r... 8 checking db.h usability... yes checking db.h presence... yes checking for db.h...