Question
I'm working on a web crawler (please don't suggest an existing one; it's not an option). I have it working the way it's expected to. My only issue is that currently I'm using a sort of server/client model, whereby the server does the crawling, processes the data, and then puts it in a central location.
This location is an object created from a class I wrote. Internally the class maintains a hashmap declared as HashMap&lt;String, HashMap&lt;String, String&gt;&gt;.
I store data in the map with the URL as the key (I keep these unique), and the inner hashmap stores the corresponding data fields for that URL, such as title, value, etc.
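A minimal sketch of the structure described above (class and method names are illustrative, not from the original post), using ConcurrentHashMap for both levels since the spider is multithreaded:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical sketch of the crawler's central store: URL -> {field -> value}. */
public class CrawlStore {
    // Outer map keyed by URL; inner map holds that URL's metadata fields.
    private final ConcurrentHashMap<String, Map<String, String>> data = new ConcurrentHashMap<>();

    /** Record one metadata field for a URL, creating the inner map on first use. */
    public void put(String url, String field, String value) {
        data.computeIfAbsent(url, k -> new ConcurrentHashMap<>()).put(field, value);
    }

    /** Look up one field for a URL, or null if the URL or field is absent. */
    public String get(String url, String field) {
        Map<String, String> fields = data.get(url);
        return fields == null ? null : fields.get(field);
    }

    public static void main(String[] args) {
        CrawlStore store = new CrawlStore();
        store.put("https://example.com", "title", "Example Domain");
        System.out.println(store.get("https://example.com", "title"));
    }
}
```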
I occasionally serialize the internal objects, but the spider is multithreaded, and as soon as I have, say, 5 threads crawling, the memory requirements go up exponentially.
So far the performance has been excellent with the hashmap, crawling 15K URLs in 2.r minutes with about 30 seconds of CPU time, so I really don't need to be pointed in the direction of an existing spider like most forum users have suggested.
Can anyone suggest a fast disk-based solution that will support concurrent reading and writing? The data structure doesn't have to be the same; it just needs to be able to store related meta tag values together, etc.
Thanks in advance.
Answer 1:
I suggest using EhCache for this, even though what you're building isn't really a cache. EhCache allows you to configure the cache instance so that it overflows to disk storage while keeping the most recently used items in memory. It can also be configured to be disk-persistent, i.e. data is flushed to disk on shutdown and read back into memory at startup. On top of all that, it's key-value based, so it already fits your model. It supports concurrent access, and since the disk storage is managed in a separate thread, you shouldn't need to worry about disk-access concurrency.
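A rough configuration sketch of what that setup might look like in an Ehcache 2.x-style XML file (the cache name and sizes here are illustrative assumptions, not from the answer):

```xml
<ehcache>
  <!-- Directory where overflowed/persisted entries are written -->
  <diskStore path="java.io.tmpdir/crawler-cache"/>

  <!-- Keep the hottest 10000 entries on heap, spill the rest to disk,
       and persist to disk on shutdown so data survives restarts -->
  <cache name="crawlData"
         maxElementsInMemory="10000"
         eternal="true"
         overflowToDisk="true"
         diskPersistent="true"/>
</ehcache>
```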
Alternatively, you could consider a proper embedded database such as Hypersonic (or numerous others of a similar style), but that's probably going to be more work.
Answer 2:
There is Tokyo Cabinet, which is a fast implementation of a disk-based hash table.
In your case, I think the best way to store values in such a setup would be to prefix the metadata keys with the URL:
[url]_[name] => [value]
[url]_[name2] => [value2]
Unfortunately, I'm not sure you can enumerate the metadata for a given URL using this solution.
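Enumeration does become possible if the store keeps keys in sorted order (Tokyo Cabinet's B+ tree engine, as opposed to its hash engine, supports ordered traversal): all keys for one URL form a contiguous range. A runnable sketch of the idea, using java.util.TreeMap as a stand-in for the ordered disk store (names and the separator choice are my assumptions):

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class PrefixKeys {
    // Flatten "url -> {name -> value}" into "url<SEP>name -> value".
    // '\u0000' cannot appear in a URL, so it is an unambiguous separator.
    static final char SEP = '\u0000';

    static String key(String url, String name) {
        return url + SEP + name;
    }

    // All keys for `url` sort between url+'\u0000' (inclusive)
    // and url+'\u0001' (exclusive), so one range query enumerates them.
    static SortedMap<String, String> forUrl(TreeMap<String, String> store, String url) {
        return store.subMap(url + SEP, url + (char) (SEP + 1));
    }

    public static void main(String[] args) {
        TreeMap<String, String> store = new TreeMap<>();
        store.put(key("http://a.com", "title"), "A");
        store.put(key("http://a.com", "desc"), "site A");
        store.put(key("http://b.com", "title"), "B");
        // Only the two http://a.com entries fall in the range.
        System.out.println(forUrl(store, "http://a.com").size());
    }
}
```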
If you want to use a more structured data store, there are also MongoDB and SQLite, which I would recommend.
Answer 3:
The JDBM2 library provides persistent maps for Java. It's fast and thread-safe.
UPDATE: It has since evolved into the MapDB project.
Answer 4:
What about using JPA in your class and persisting the data in a database (which can be file-based, like SQLite)? http://en.wikipedia.org/wiki/Java_Persistence_API
Answer 5:
Chronicle Map is an embeddable, hash-based Java data store that persists data to disk (to a single file) and aims to be a drop-in replacement for ConcurrentHashMap (it provides the same ConcurrentMap interface). Chronicle Map is the fastest store among similar solutions and features excellent read/write concurrency, scaling almost linearly with the number of available cores in the machine.
Disclaimer: I'm the developer of Chronicle Map.
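Because Chronicle Map implements ConcurrentMap, crawler code written against that interface works unchanged whether the store is in-memory or persisted. A runnable sketch of that pattern, using ConcurrentHashMap as the stand-in so it runs without the Chronicle jar (the builder call shown in the comment is hedged from the Chronicle Map documentation; file name and sizes are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class DropInDemo {
    // With the Chronicle Map jar on the classpath, the store below could
    // instead be constructed roughly like this (persisted to a single file):
    //
    //   ConcurrentMap<String, String> store = ChronicleMap
    //       .of(String.class, String.class)
    //       .name("crawl-data")
    //       .entries(50_000)
    //       .averageKey("https://example.com/some/page")
    //       .averageValue("title=Example")
    //       .createPersistedTo(new File("crawl.dat"));
    //
    // Everything else stays the same, since only the interface is used.

    /** Atomically record a URL's metadata; keeps URLs unique across threads. */
    static String record(ConcurrentMap<String, String> store, String url, String meta) {
        return store.putIfAbsent(url, meta); // returns existing value, or null if newly added
    }

    public static void main(String[] args) {
        ConcurrentMap<String, String> store = new ConcurrentHashMap<>();
        record(store, "https://example.com", "title=Example");
        record(store, "https://example.com", "title=Duplicate"); // ignored: URL already present
        System.out.println(store.get("https://example.com"));
    }
}
```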
Source: https://stackoverflow.com/questions/3316630/java-disc-based-hashmap