Spark: processing multiple Kafka topics in parallel

Submitted by 馋奶兔 on 2019-11-27 00:56:22

Question


I am using Spark 1.5.2. I need to run a Spark Streaming job with Kafka as the streaming source. I need to read from multiple topics within Kafka and process each topic differently.

  1. Is it a good idea to do this in the same job? If so, should I create a single stream with multiple partitions, or a different stream for each topic?
  2. I am using the Kafka direct stream. As far as I know, Spark launches long-running receivers for each partition. I have a relatively small cluster, 6 nodes with 4 cores each. If I have many topics, with many partitions in each topic, would efficiency suffer because most of the executors are busy with long-running receivers? Please correct me if my understanding is wrong here.

Answer 1:


I made the following observations, in case it's helpful for someone:

  1. In the Kafka direct stream, the receivers are not run as long-running tasks. At the beginning of each batch interval, the data is first read from Kafka on the executors; once read, the processing part takes over.
  2. If we create a single stream with multiple topics, the topics are read one after the other. Also, filtering the DStream to apply different processing logic to each topic adds another step to the job.
  3. Creating multiple streams helps in two ways: 1. You don't need to apply a filter operation to process different topics differently. 2. You can read multiple streams in parallel (as opposed to one by one in the single-stream case). To do so, there is an undocumented config parameter, spark.streaming.concurrentJobs. So, I decided to create multiple streams (a sketch follows the config snippet below).

    sparkConf.set("spark.streaming.concurrentJobs", "4");
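
For illustration, here is a minimal, self-contained Java sketch of this multi-stream setup. It assumes Spark 1.5.x with the spark-streaming-kafka artifact on the classpath; the broker address and the topic names ("orders", "clicks") are placeholders:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    import kafka.serializer.StringDecoder;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    public class MultiTopicStreams {
        public static void main(String[] args) throws Exception {
            SparkConf sparkConf = new SparkConf().setAppName("multi-topic-demo");
            // Let the jobs of the independent streams run concurrently
            // instead of one after the other.
            sparkConf.set("spark.streaming.concurrentJobs", "4");

            JavaStreamingContext jssc =
                new JavaStreamingContext(sparkConf, Durations.seconds(10));

            Map<String, String> kafkaParams = new HashMap<String, String>();
            kafkaParams.put("metadata.broker.list", "broker1:9092"); // placeholder broker

            // One direct stream per topic, so each stream gets its own
            // processing logic and no filter step is needed.
            JavaPairInputDStream<String, String> orders = KafkaUtils.createDirectStream(
                jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, Collections.singleton("orders"));
            JavaPairInputDStream<String, String> clicks = KafkaUtils.createDirectStream(
                jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, Collections.singleton("clicks"));

            // Stand-ins for two different per-topic pipelines.
            orders.count().print();
            clicks.count().print();

            jssc.start();
            jssc.awaitTermination();
        }
    }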
    



Answer 2:


I think the right solution depends on your use case.

If your processing logic is the same for data from all topics, then, without doubt, a single stream covering all the topics is the better approach.
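
As a sketch of that case, continuing the setup (jssc, kafkaParams) from the snippet under Answer 1, with placeholder topic names:

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    // One direct stream subscribed to several topics at once; every
    // record flows through the same processing logic.
    Set<String> topics = new HashSet<String>(Arrays.asList("orders", "clicks"));
    JavaPairInputDStream<String, String> unified = KafkaUtils.createDirectStream(
        jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
        kafkaParams, topics);
    unified.count().print(); // one pipeline handles records from every topic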

If the processing logic is different, then I guess you get a single RDD with data from all the topics, and you have to create a paired RDD for each processing logic and handle it separately. The problem is that this creates a sort of grouping in the processing, and the overall processing speed will be determined by the topic that needs the longest time to process. So topics with less data have to wait until data from all topics is processed. One advantage is that if it is time-series data, the processing proceeds together, which might be a good thing.
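
One way to split a single multi-topic direct stream by topic (a sketch, not from the original answer) is the HasOffsetRanges pattern: each partition of a direct-stream RDD corresponds to exactly one Kafka topic-partition, so the partition index can be mapped back to its topic. Continuing from the unified stream above, with placeholder per-topic logic:

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.spark.streaming.kafka.HasOffsetRanges;
    import org.apache.spark.streaming.kafka.OffsetRange;
    import scala.Tuple2;

    unified.foreachRDD(rdd -> {
        // offsetRanges records which topic-partition backs each RDD partition.
        final OffsetRange[] ranges = ((HasOffsetRanges) rdd.rdd()).offsetRanges();
        rdd.mapPartitionsWithIndex((idx, records) -> {
            String topic = ranges[idx].topic();
            List<String> out = new ArrayList<String>();
            while (records.hasNext()) {
                Tuple2<String, String> record = records.next();
                // Placeholder per-topic logic: tag each value with its topic.
                out.add(topic + ":" + record._2());
            }
            return out.iterator();
        }, true).count(); // an action to force evaluation
        return null;      // foreachRDD takes a Function<RDD, Void> in Spark 1.x
    });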

Another advantage of running independent jobs is that you get better control and can adjust your resource sharing. For example, a job that processes a topic with high throughput can be allocated more CPU/memory.
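
For instance, two independent jobs could be submitted with different resource allocations; the class names, jar names, and resource numbers below are placeholders:

    spark-submit --class com.example.HighVolumeTopicJob --master yarn \
      --num-executors 4 --executor-cores 3 --executor-memory 6g \
      high-volume-job.jar

    spark-submit --class com.example.LowVolumeTopicJob --master yarn \
      --num-executors 2 --executor-cores 1 --executor-memory 2g \
      low-volume-job.jar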



Source: https://stackoverflow.com/questions/34430636/spark-processing-multiple-kafka-topic-in-parallel
