I'm running a Spark job in speculation mode. I have around 500 tasks and around 500 files of 1 GB each, gz compressed. In each job, for 1-2 tasks, I keep getting the attac
I solved this error by increasing the memory allocated to the driver and the executors (driverMemory and executorMemory). You can do this in HUE by selecting the Spark program that is causing the problem and, under Properties -> Option list, adding something like this:
--driver-memory 10G --executor-memory 10G --num-executors 50 --executor-cores 2
Of course, the values of the parameters will vary depending on your cluster's size and your needs.
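If you are not going through HUE, a minimal sketch of the equivalent settings applied programmatically (assuming Scala and a hypothetical app name) would be something like:

    import org.apache.spark.{SparkConf, SparkContext}

    // Mirror the option list above; tune the values to your cluster.
    val conf = new SparkConf()
      .setAppName("GzProcessingJob")        // hypothetical application name
      .set("spark.executor.memory", "10g")
      .set("spark.executor.instances", "50")
      .set("spark.executor.cores", "2")
      // Note: spark.driver.memory is only picked up before the driver JVM starts,
      // so in practice pass it via --driver-memory or spark-defaults.conf instead.
      .set("spark.driver.memory", "10g")

    val sc = new SparkContext(conf)

The same keys can also be put in spark-defaults.conf or passed on the spark-submit command line, which is usually the safer place for the driver memory setting.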