Is it better to send data to hbase via one stream or via several servers concurrently?

Submitted by 可紊 on 2019-12-10 18:43:51

Question


I'm sorry if this question is basic (I'm new to NoSQL). Basically I have a large mathematical process that I'm splitting up, having different servers compute pieces of it and send the results to an HBase database. Each server computing the data is an HBase region server and has Thrift running on it.

I was thinking of having each server process its data and then update HBase locally (via Thrift). I'm not sure if this is the best approach because I don't fully understand how the master node will handle the upload/splitting.

I'm wondering what the best practice is when uploading large amounts of data (in total I suspect it'll be several million rows)? Is it okay to send it to region servers, or should everything go through the master?


Answer 1:


From this blog post,

The general flow is that a new client contacts the Zookeeper quorum (a separate cluster of Zookeeper nodes) first to find a particular row key. It does so by retrieving the server name (i.e. host name) that hosts the -ROOT- region from Zookeeper. With that information it can query that server to get the server that hosts the .META. table. Both of these two details are cached and only looked up once. Lastly it can query the .META. server and retrieve the server that has the row the client is looking for.

Once it has been told where the row resides, i.e. in what region, it caches this information as well and contacts the HRegionServer hosting that region directly. So over time the client has a pretty complete picture of where to get rows from without needing to query the .META. server again.
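The lookup-then-cache behaviour the quote describes can be illustrated with a small self-contained Python sketch. Nothing here touches a real HBase cluster; `RegionLocator`, the key ranges and the server names are all made up for illustration:

```python
# Sketch of the client-side behaviour described above: a row's region
# location is resolved once (simulating the .META. lookup), cached, and
# reused for subsequent requests to the same row.

class RegionLocator:
    def __init__(self, meta):
        # meta maps a region's start key to the host serving it,
        # standing in for the .META. table.
        self.meta = sorted(meta.items())   # [(start_key, server), ...]
        self.cache = {}                    # row key -> server
        self.meta_lookups = 0              # simulated remote lookups

    def locate(self, row_key):
        if row_key in self.cache:
            return self.cache[row_key]     # no network trip needed
        self.meta_lookups += 1             # simulated trip to .META.
        server = None
        for start_key, host in self.meta:  # last region whose start key
            if row_key >= start_key:       # precedes the row wins
                server = host
        self.cache[row_key] = server
        return server

# Two regions: ["", "m") on rs1, ["m", end) on rs2.
locator = RegionLocator({"": "rs1.example", "m": "rs2.example"})
assert locator.locate("apple") == "rs1.example"
assert locator.locate("zebra") == "rs2.example"
assert locator.locate("apple") == "rs1.example"  # served from cache
assert locator.meta_lookups == 2                 # not 3
```

Over many writes, the cache means the client talks almost exclusively to the region servers that own its rows, which is exactly why it doesn't matter much which machine originates the writes.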

I am assuming you use the Thrift interface directly. In that case, even if you send a mutation to a particular region server's Thrift service, that region server only acts as a client: it contacts the ZooKeeper quorum, looks up via .META. which region (and hence which region server) owns each row, and proceeds the same way any other client would.
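Concretely, the per-server writes described in the question might look like the following sketch using happybase, a common Python Thrift client for HBase. The host, table name (`math_results`) and column family (`cf`) are placeholders, not names from the question; any region server running a Thrift server will do:

```python
# Writing computed results through a local Thrift gateway with happybase.
# Per the answer above, the Thrift host only acts as a client -- each row
# still lands on whichever region server owns it.
try:
    import happybase  # pip install happybase
except ImportError:
    happybase = None  # sketch still illustrates the shape without it

def to_mutations(result):
    """Convert a computed result dict into HBase column mutations."""
    return {f"cf:{name}".encode(): str(value).encode()
            for name, value in result.items()}

def write_results(host, results):
    """results: iterable of (row_key, result_dict) pairs."""
    connection = happybase.Connection(host)   # e.g. the local Thrift server
    table = connection.table("math_results")  # placeholder table name
    # Batch the puts so each Thrift round trip carries many rows
    # instead of one -- important at several million rows.
    with table.batch(batch_size=1000) as batch:
        for row_key, result in results:
            batch.put(row_key.encode(), to_mutations(result))
    connection.close()
```

Batching (or the equivalent in whatever client you use) matters far more for throughput here than which server the writes originate from.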

Is it okay to send it to regional servers or should everything go through the master?

Both are the same. There is no such thing as choosing the region server you write to: the client always has to look up which region owns each row, and the write ends up on whichever region server hosts that region, regardless of where you submitted it.

If you are loading the data with a Hadoop MapReduce job using the Java API, you can use HFileOutputFormat to write HFiles directly and then bulk-load them into the table, bypassing the normal HBase write path. That is roughly ~10x faster than writing through the API.
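As a sketch of that bulk-load path without writing a custom job: if the computed results can be dumped as TSV files on HDFS, the stock ImportTsv tool can produce HFiles and LoadIncrementalHFiles can move them into the table. The table name, column and paths below are placeholders, not values from the question:

```shell
# Step 1: MapReduce job that writes HFiles (not Puts) for table
# "math_results"; input is TSV with row key then a cf:result column.
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:result \
  -Dimporttsv.bulk.output=hdfs:///tmp/hfiles \
  math_results hdfs:///tmp/input

# Step 2: hand the finished HFiles to the region servers.
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
  hdfs:///tmp/hfiles math_results
```

Note that without `-Dimporttsv.bulk.output`, ImportTsv falls back to writing through the normal API, losing the speedup.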



Source: https://stackoverflow.com/questions/7638270/is-it-better-to-send-data-to-hbase-via-one-stream-or-via-several-servers-concurr
