Free Large datasets to experiment with Hadoop


A few points about your question regarding crawling and Wikipedia.

You have linked to the Wikipedia data dumps, and you can use the Cloud9 project from UMD to work with this data in Hadoop.

They have a page on this: Working with Wikipedia
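If you just want a quick first experiment before picking up Cloud9, a plain Hadoop Streaming job is often enough. The sketch below is not Cloud9's API; it counts pages in a decompressed pages-articles XML dump, assuming each `<title>` element sits on its own line (true of current dumps, but an assumption). The input/output paths and the streaming jar location in the comment are illustrative.

```python
#!/usr/bin/env python3
# wiki_page_count.py -- minimal Hadoop Streaming sketch (not Cloud9's API):
# count <page> entries in a decompressed pages-articles XML dump, assuming
# each <title> element appears on its own line (an assumption about the dump
# layout, not a guarantee).
#
# Example launch (paths are illustrative):
#   hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
#     -input /data/enwiki-pages-articles.xml -output /out/wiki-count \
#     -mapper "wiki_page_count.py map" -reducer "wiki_page_count.py reduce" \
#     -file wiki_page_count.py
import sys


def mapper():
    # Emit one count per <title> line; Streaming feeds the file line by line.
    for line in sys.stdin:
        line = line.strip()
        if line.startswith("<title>") and line.endswith("</title>"):
            print("pages\t1")


def reducer():
    # Sum the counts emitted by all mappers.
    total = 0
    for line in sys.stdin:
        try:
            _, value = line.rstrip("\n").split("\t", 1)
            total += int(value)
        except ValueError:
            continue
    print(f"pages\t{total}")


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "reduce":
        reducer()
    else:
        mapper()
```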

Another data source to add to the list:

  • ClueWeb09 - 1 billion web pages collected between January and February 2009. 5 TB compressed.

I would say that using a crawler to generate data should be posted as a separate question from this one about Hadoop/MapReduce.

APC

One obvious source: the Stack Overflow trilogy data dumps. These are freely available under the Creative Commons license.

Here is a collection of 189 datasets for machine learning (which is one of the nicest applications for Hadoop): http://archive.ics.uci.edu/ml/datasets.html

It's not a log file, but maybe you could use the planet file from OpenStreetMap: http://wiki.openstreetmap.org/wiki/Planet.osm

CC licence, about 160 GB (unpacked)

There are also smaller files for each continent: http://wiki.openstreetmap.org/wiki/World
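If you want to get a feel for the planet file before loading it into Hadoop, a streaming XML parse from the standard library works on a single machine. This is a rough sketch under the assumption that you have the XML (.osm) export rather than the binary PBF format; the file path is illustrative.

```python
#!/usr/bin/env python3
# osm_count.py -- count nodes, ways, and relations in an OSM XML export.
# Assumes the XML (.osm) format, not PBF; the default path is illustrative.
import sys
import xml.etree.ElementTree as ET
from collections import Counter

path = sys.argv[1] if len(sys.argv) > 1 else "planet.osm"
counts = Counter()

# iterparse streams the document so the whole file never sits in memory.
context = ET.iterparse(path, events=("start", "end"))
_, root = next(context)  # grab the root element from the first start event

for event, elem in context:
    if event == "end" and elem.tag in ("node", "way", "relation"):
        counts[elem.tag] += 1
        root.clear()  # drop already-processed children to keep memory flat

for tag, n in counts.items():
    print(f"{tag}\t{n}")
```

The same kind of element-level counting translates directly into a MapReduce job once the file is split and stored on HDFS.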
