I'm using Spark to calculate the PageRank of user reviews, but I keep getting a java.lang.StackOverflowError when I run my code on a big dataset.
I have several suggestions that will help you greatly improve the performance of the code in your question.
An example is RDD.count: to tell you the number of lines in the file, the file needs to be read. So if you write RDD.count, at this point the file will be read, the lines will be counted, and the count will be returned. What if you call RDD.count again? The same thing: the file will be read and counted again. So what does RDD.cache do? It tells Spark to keep the RDD around after it is first computed. Now, if you run RDD.count the first time, the file will be loaded, cached, and counted. If you call RDD.count a second time, the operation will use the cache: it will just take the data from the cache and count the lines, with no recomputing.

Read more about caching in the Spark documentation.
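As a minimal sketch of the difference (the input path data/reviews.txt is a hypothetical stand-in for your file):

```scala
import org.apache.spark.sql.SparkSession

object CacheExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("CacheExample").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    // Without cache: every action recomputes the RDD from the source.
    val lines = sc.textFile("data/reviews.txt") // hypothetical path
    lines.count() // reads the file and counts the lines
    lines.count() // reads the file again from disk

    // With cache: the first action materializes the RDD in memory,
    // and later actions reuse the cached partitions.
    val cached = sc.textFile("data/reviews.txt").cache()
    cached.count() // reads the file, caches it, counts
    cached.count() // served from the cache, no re-read

    spark.stop()
  }
}
```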
In your code sample you are not reusing anything that you've cached, so you can remove the .cache from there.
Combine your rddFileData, rddMovieData and rddPairReviewData steps so that they happen in one go (see the sketch below). Get rid of .collect, since that brings all the results back to the driver and may be the actual reason for your error.
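A sketch of chaining the three steps into a single lineage; the CSV layout (userId, movieId, rating) and the parsing are assumptions standing in for your actual data, and sc is your SparkContext:

```scala
// One chained pipeline instead of three separately materialized RDDs.
val rddPairReviewData = sc
  .textFile("data/reviews.txt")                        // was: rddFileData
  .map(_.split(","))                                   // was: rddMovieData
  .map(fields => (fields(0), (fields(1), fields(2))))  // was: rddPairReviewData

// No .collect here: keep the data distributed and let later stages
// (joins, PageRank iterations) run on the cluster. Only bring small,
// final results to the driver, e.g. for inspection:
rddPairReviewData.take(10).foreach(println)
```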
This problem will occur when your DAG grows big and your code chains too many levels of transformations. The JVM will not be able to hold the whole set of deferred operations for lazy execution when an action is finally performed.
Checkpointing is one option. I would suggest using Spark SQL for this kind of aggregation: if your data is structured, try to load it into DataFrames and perform the grouping and other aggregations with SQL functions, as sketched below.
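A minimal sketch of the DataFrame approach, assuming a hypothetical CSV of reviews with userId and rating columns; adjust the path and schema to your data:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder.appName("ReviewAggregates").getOrCreate()

// Hypothetical structured input.
val reviews = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("data/reviews.csv")

// Grouping and aggregation go through the Catalyst optimizer instead of
// building up a long hand-written RDD lineage.
val perUser = reviews
  .groupBy("userId")
  .agg(count("*").as("numReviews"), avg("rating").as("avgRating"))

perUser.show()
```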
When your for loop grows really large, Spark can no longer keep track of the lineage. Enable checkpointing in your for loop to checkpoint your RDD every 10 iterations or so; checkpointing truncates the lineage, which fixes the problem (see the sketch after the link below). Don't forget to clean up the checkpoint directory afterwards.
http://spark.apache.org/docs/latest/streaming-programming-guide.html#checkpointing
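Here is a minimal sketch of that pattern, assuming links is an RDD[(String, Seq[String])] of adjacency lists; the rank update is a standard PageRank step standing in for your own logic, and the checkpoint directory is hypothetical:

```scala
sc.setCheckpointDir("hdfs:///tmp/checkpoints") // hypothetical directory

var ranks = links.mapValues(_ => 1.0)
for (i <- 1 to numIterations) {
  // Standard PageRank contribution/update step.
  val contribs = links.join(ranks).values.flatMap {
    case (neighbors, rank) => neighbors.map(dst => (dst, rank / neighbors.size))
  }
  ranks = contribs.reduceByKey(_ + _).mapValues(sum => 0.15 + 0.85 * sum)

  if (i % 10 == 0) {
    ranks.cache()       // keep the data so checkpointing doesn't recompute it
    ranks.checkpoint()  // writes to the checkpoint dir and truncates the lineage
    ranks.count()       // an action to force the checkpoint to actually run
  }
}
```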