I'm trying to understand how Spark's cache works.
Here is my naive understanding; please let me know if I'm missing something:
val rdd1 = sc.textFile("some data").cache()
Option B is a near-optimal approach with one small tweak: use a less expensive action. In your code, saveAsTextFile is an expensive operation because it writes the entire dataset out to storage; replace it with count.
The idea is to free the big rdd1 once it is no longer needed for further computation, i.e. after rdd2 and rdd3 have been materialized and cached. Note that unpersist only drops rdd1's cached blocks; its lineage stays in the DAG, so Spark can still recompute it if a cached partition of rdd2 or rdd3 is ever lost.
Updated approach:
val rdd1 = sc.textFile("some data").cache()  // cache the source so both children read it from memory
val rdd2 = rdd1.filter(...).cache()
val rdd3 = rdd1.map(...).cache()
rdd2.count  // cheap action: materializes rdd2 and populates rdd1's cache on the first pass
rdd3.count  // materializes rdd3, reading rdd1 from the cache
rdd1.unpersist()  // both children are cached, so release rdd1's memory
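
For completeness, here is a minimal self-contained sketch of the same pattern. The input path and the specific filter/map logic (non-empty lines, line lengths) are hypothetical stand-ins for the elided ... above:

import org.apache.spark.sql.SparkSession

object CacheDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("CacheDemo").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    // Cache the source so both derived RDDs read it from memory on the second pass.
    val rdd1 = sc.textFile("some data").cache()  // hypothetical path
    val rdd2 = rdd1.filter(_.nonEmpty).cache()   // hypothetical predicate
    val rdd3 = rdd1.map(_.length).cache()        // hypothetical mapping

    // count forces full evaluation (populating the caches) but only ships a
    // single Long back to the driver, unlike saveAsTextFile, which writes
    // every partition to storage.
    rdd2.count()
    rdd3.count()

    // Both children are now cached, so rdd1's blocks can be released.
    rdd1.unpersist()

    spark.stop()
  }
}

The ordering matters: unpersist takes effect as soon as it is called, so calling it before both count actions would evict rdd1 and force the later action to re-read the source.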