I am running a Kinesis plus Spark application (https://spark.apache.org/docs/1.2.0/streaming-kinesis-integration.html) on an EC2 instance, launched with the command below.
In one instance, I had this issue because I was asking for too many resources. This was on a small standalone cluster. The original command was
spark-submit --driver-memory 4G --executor-memory 7G --class "my.class" --master yarn --deploy-mode cluster --conf spark.yarn.executor.memoryOverhead=1024 my.jar
(Note that `--class` takes two dashes, and `--conf` expects a `key=value` pair; the overhead value of 1024 MB here is illustrative.)
I succeeded in getting past 'Accepted' and into 'Running' by changing to
spark-submit --driver-memory 1G --executor-memory 3G --class "my.class" --master yarn --deploy-mode cluster --conf spark.yarn.executor.memoryOverhead=1024 my.jar
In other instances, I had this problem because of the way the code was written. We instantiated the SparkContext inside the class where it was used, and it never got stopped. We fixed the problem by instantiating the context first, passing it into the class where the data is parallelized, and then stopping the context (sc.stop()) in the caller.
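The lifecycle refactor described above can be sketched as follows. This is a hypothetical illustration of the pattern, not the actual application code; `FakeSparkContext` is a stand-in for the real `SparkContext` so the sketch runs without a Spark installation. The point is only the ownership: the caller creates the context, injects it, and stops it.

```python
class FakeSparkContext:
    """Minimal stand-in for a real SparkContext (hypothetical)."""
    def __init__(self):
        self.stopped = False

    def parallelize(self, data):
        return list(data)      # a real SparkContext would return an RDD

    def stop(self):
        self.stopped = True    # a real stop() releases cluster resources


class Worker:
    """The class that uses the context; it no longer owns its lifecycle."""
    def __init__(self, sc):
        self.sc = sc           # context is injected, not created here

    def process(self, data):
        return sum(self.sc.parallelize(data))


def main():
    sc = FakeSparkContext()    # the caller creates the context...
    try:
        result = Worker(sc).process([1, 2, 3])
    finally:
        sc.stop()              # ...and the caller stops it, even on errors
    return result, sc.stopped


print(main())                  # (6, True)
```

Wrapping the work in `try`/`finally` guarantees the context is stopped even if processing throws, which is what keeps YARN from holding the application's resources.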