SparkR Job 100 Minutes Timeout
Question: I have written a somewhat complex SparkR script and run it using spark-submit. What the script basically does is read a big Hive/Impala Parquet-based table row by row and generate a new Parquet file with the same number of rows. But the job stops after almost exactly 100 minutes, which looks like some kind of timeout.

For up to 500K rows the script works perfectly (because it needs less than 100 minutes). For 1, 2, 3 or more million rows, the script exits after 100 minutes.

I checked all possible parameters having values
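For reference, here is a hedged sketch of how timeout-related Spark settings can be raised at submission time. The configuration names below are standard Spark properties; in particular, `spark.r.backendConnectionTimeout` (the timeout between the R process and the SparkR backend) reportedly defaults to 6000 seconds, which is exactly 100 minutes, so it is a plausible candidate to check. The script name and values are placeholders, not from the original post.

```shell
# Hypothetical spark-submit invocation raising common timeout settings.
# myscript.R and the chosen values are illustrative only.
spark-submit \
  --conf spark.r.backendConnectionTimeout=86400 \
  --conf spark.network.timeout=600s \
  --conf spark.executor.heartbeatInterval=60s \
  myscript.R
```

Note that `spark.executor.heartbeatInterval` should stay well below `spark.network.timeout`, since heartbeats are what keep the executor from being considered lost.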