Hadoop and MySQL Integration

Submitted by 微笑、不失礼 on 2019-12-06 18:08:21

Question


We would like to implement Hadoop on our system to improve its performance.

The process works like this: Hadoop will gather data from MySQL database then process it. The output will then be exported back to MySQL database.

Is this a good implementation? Will this improve our system's overall performance? What are the requirements and has this been done before? A good tutorial would really help.

Thanks


Answer 1:


Although this is not a typical Hadoop use case, it can make sense in the following scenario:
a) You have a good way to partition your data into inputs (for example, an existing partitioning scheme).
b) The processing of each partition is relatively heavy — as a rule of thumb, at least 10 seconds of CPU time per partition.
If both conditions are met, you will be able to apply any desired amount of CPU power to your data processing.
If you are doing a simple scan or aggregation, I think you will not gain anything. On the other hand, if you are going to run CPU-intensive algorithms on each partition, then your gain can indeed be significant.
I would also mention a separate case: processing that requires massive data sorting. I do not think MySQL will be good at sorting billions of records, but Hadoop will do it.




Answer 2:


Sqoop is a tool designed to import data from relational databases into Hadoop.

https://github.com/cloudera/sqoop/wiki/

and here is a video about it: http://www.cloudera.com/blog/2009/12/hadoop-world-sqoop-database-import-for-hadoop/
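As a rough sketch of what a Sqoop round trip looks like (the database name, table names, and column names here are hypothetical, not from the question):

```shell
# Import a hypothetical "orders" table from MySQL into HDFS,
# splitting the work across 4 parallel map tasks on the "id" column.
sqoop import \
  --connect jdbc:mysql://dbhost/shopdb \
  --username etl_user -P \
  --table orders \
  --split-by id \
  --num-mappers 4 \
  --target-dir /data/orders

# After the Hadoop job has produced its output, export the
# results from HDFS back into a MySQL table.
sqoop export \
  --connect jdbc:mysql://dbhost/shopdb \
  --username etl_user -P \
  --table order_stats \
  --export-dir /data/order_stats
```

Both commands run against a live cluster and database, so the connection string and credentials would need to be adapted to your environment.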




Answer 3:


Hadoop is mostly used for batch jobs on large amounts of semi-structured data. Batch in the sense that even the shortest jobs run on the order of minutes. What kind of performance problem are you facing? Is it in data transformations or in reporting? Depending on that, this architecture may help or may make things worse.




Answer 4:


As mentioned by Joe, Sqoop is a great tool in the Hadoop ecosystem for importing and exporting data from and to SQL databases such as MySQL.

If you need more complex integration with MySQL, including e.g. filtering or transformation, then you should use an integration framework or integration suite for this problem. Take a look at my presentation "Big Data beyond Hadoop - How to integrate ALL your data" for more information about how to use open source integration frameworks and integration suites with Hadoop.




Answer 5:


I agree with Sai. I use Hadoop with MySQL only when needed. I export the table to CSV and upload it to HDFS to process the data more quickly. If you want to persist your processed data, you will have to write a single-reducer job that performs some kind of batch inserts to improve insertion performance.
BUT that really depends on what kind of things you want to do.
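The export/upload round trip described above can be sketched with standard MySQL and Hadoop command-line tools (the table names, paths, and credentials here are hypothetical placeholders):

```shell
# Dump a hypothetical "events" table to CSV; --batch emits
# tab-separated rows, which tr converts to commas.
mysql -u etl_user -p shopdb --batch \
  -e "SELECT * FROM events" | tr '\t' ',' > events.csv

# Upload the CSV to HDFS for the Hadoop job to process.
hadoop fs -put events.csv /data/events.csv

# After the job runs, merge the reducer output into one local file
# and bulk-load it back into MySQL in a single statement instead of
# doing row-by-row inserts.
hadoop fs -getmerge /data/events_out events_out.csv
mysql -u etl_user -p shopdb \
  -e "LOAD DATA LOCAL INFILE 'events_out.csv' INTO TABLE event_stats FIELDS TERMINATED BY ','"
```

`LOAD DATA LOCAL INFILE` is what gives you the batch-insert performance mentioned above; the server must be configured to allow `LOCAL` loads.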



Source: https://stackoverflow.com/questions/4800994/hadoop-and-mysql-integration
