Here is what I am trying to do. I have created a two-node DataStax Enterprise cluster, on top of which I have written a Java program to get the count of one table (
In my case, the problem was that I had the following line in $SPARK_HOME/conf/spark-env.sh on each worker:

    SPARK_EXECUTOR_MEMORY=3g

and the following line in $SPARK_HOME/conf/spark-defaults.conf on the master node:

    spark.executor.memory 4g
The problem went away once I changed 4g to 3g. The master was requesting 4g executors while each worker only advertised 3g, so Spark could not allocate any executors for the job. I hope this helps someone with the same issue; the other answers helped me spot this.
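As an alternative to editing spark-defaults.conf, the executor memory can also be set from the driver program itself via the standard Spark Java API. Below is a minimal sketch; the class name, app name, and the 3g figure are illustrative, not from the original post:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class TableCount {
        public static void main(String[] args) {
            // Request no more executor memory than each worker advertises
            // via SPARK_EXECUTOR_MEMORY in spark-env.sh (3g in this case).
            SparkConf conf = new SparkConf()
                    .setAppName("table-count")
                    .set("spark.executor.memory", "3g");
            JavaSparkContext sc = new JavaSparkContext(conf);
            try {
                // ... run the table count with this context ...
            } finally {
                sc.stop();
            }
        }
    }

Note that properties set directly on a SparkConf take precedence over spark-defaults.conf, so this avoids the mismatch regardless of what the master's defaults file says.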