Number of executors and cores

北城以北 submitted on 2019-12-13 07:57:52

Question


I am new to Spark and would like to know how many cores and executors should be used for a Spark job on AWS, given 2 c4.8xlarge slave nodes and 1 c4.8xlarge master node. I have tried different combinations but am not able to work out the underlying concept.

Thank you.


Answer 1:


The Cloudera folks give a good explanation of this:

https://www.youtube.com/watch?v=vfiJQ7wg81Y

If, say, you have 16 cores on your node (I think this is exactly your case), you give 1 to YARN to manage the node and divide the remaining 15 by 3, so each executor gets 5 cores. You also have the JVM overhead, which is Max(384M, 0.07 * spark.executor.memory). So, with 3 executors per node, you pay 3 * Max(384M, 0.07 * spark.executor.memory) of overhead for the JVMs, and the rest can be used for memory containers.
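As a rough worked example of that arithmetic (a sketch only, using the 16-core assumption above and the 18 GB executor memory suggested below; check the actual core and memory counts of your instance type):

# Per-node sizing, following the assumptions in this answer.
cores_per_node = 16        # assumed above; verify against your instance type
cores_for_yarn = 1         # reserved so YARN can manage the node
executor_cores = 5         # cores given to each executor
executors_per_node = (cores_per_node - cores_for_yarn) // executor_cores   # -> 3

executor_memory_gb = 18    # the value suggested below
overhead_gb = max(0.384, 0.07 * executor_memory_gb)    # Max(384M, 0.07*spark.executor.memory)
per_node_memory_gb = executors_per_node * (executor_memory_gb + overhead_gb)

print(executors_per_node, round(overhead_gb, 2), round(per_node_memory_gb, 2))  # 3 1.26 57.78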

However, on a cluster with many users working simultaneously, YARN can evict your Spark session from some containers, forcing Spark to go all the way back through the DAG and recompute the RDDs to their present state, which is bad. That is why you should set --num-executors, --executor-memory and --executor-cores slightly lower, to leave some room for other users in advance. But this doesn't apply on AWS, where you are the only user.

--executor-memory 18Gb should work for you, by the way.
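If you set these values programmatically instead of through the spark-submit flags above, the equivalent configuration might look like this (a PySpark sketch, not part of the original answer; 6 executors comes from 2 worker nodes times 3 executors each, per the arithmetic above, and the app name is hypothetical):

from pyspark.sql import SparkSession

# Equivalent of: --num-executors 6 --executor-cores 5 --executor-memory 18g
spark = (
    SparkSession.builder
    .appName("executor-sizing-example")        # hypothetical app name
    .config("spark.executor.instances", "6")   # 2 worker nodes * 3 executors each
    .config("spark.executor.cores", "5")
    .config("spark.executor.memory", "18g")
    .getOrCreate()
)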

More details on tuning your cluster parameters: http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/



Source: https://stackoverflow.com/questions/43457175/number-of-executors-and-cores
