Should I leave the variable as transient?

Submitted by 妖精的绣舞 on 2020-01-12 10:22:26

Question


I have been experimenting with Apache Spark, trying to solve queries such as top-k, skyline, etc.

I have made a wrapper named SparkContext which encloses SparkConf and JavaSparkContext. This class implements Serializable, but since SparkConf and JavaSparkContext are not serializable, the class effectively isn't either.

I have a class named TopK that solves the top-k query. It implements Serializable, but it also has a SparkContext member variable, which is not serializable (for the reason above). Therefore I get an exception whenever I try to call a TopK method from within a .reduce() function on an RDD.

The solution I have found is to make the SparkContext field transient.

My question is: Should I keep the SparkContext variable as transient, or am I making a big mistake?

SparkContext class:

import java.io.Serializable;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.*;

public class SparkContext implements Serializable {

    private final SparkConf sparkConf;           // this is not serializable
    private final JavaSparkContext sparkContext; // this is not either

    protected SparkContext(String appName, String master) {
        this.sparkConf = new SparkConf();
        this.sparkConf.setAppName(appName);
        this.sparkConf.setMaster(master);

        this.sparkContext = new JavaSparkContext(sparkConf);
    }

    protected JavaRDD<String> textFile(String path) {
        return sparkContext.textFile(path);
    }

}

TopK class:

public class TopK implements QueryCalculator, Serializable {

    private final transient SparkContext sparkContext;
    .
    .
    .
}

Example that throws a Task not serializable exception. getBiggestPointByXDimension is never even entered, because for it to be executed in a reduce function, the class enclosing it (TopK) must be serializable.

private Point findMedianPoint(JavaRDD<Point> points) {
    Point biggestPointByXDimension = points.reduce((a, b) -> getBiggestPointByXDimension(a, b));
    .
    .
    .
}

private Point getBiggestPointByXDimension(Point first, Point second) {
    return first.getX() > second.getX() ? first : second;
}

Answer 1:


To your question: Should I keep the SparkContext variable as transient?

Yes, that's fine. It only encapsulates the (Java)SparkContext, and the context is not usable on the workers anyway, so marking it transient just tells the serializer to skip that field.
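The serializer-skips-the-field behavior is plain Java serialization, so it can be demonstrated without Spark at all. In the sketch below, Wrapper and FakeContext are hypothetical stand-ins for your SparkContext wrapper and the non-serializable JavaSparkContext; after a round trip the transient field simply comes back null:

```java
import java.io.*;

public class TransientDemo {
    // Hypothetical stand-in for JavaSparkContext: does NOT implement Serializable
    static class FakeContext {}

    // Hypothetical stand-in for the question's SparkContext wrapper
    static class Wrapper implements Serializable {
        final transient FakeContext context = new FakeContext(); // skipped by the serializer
        final String appName;                                    // serialized normally
        Wrapper(String appName) { this.appName = appName; }
    }

    // Serialize to bytes and deserialize back, as Spark would when shipping a task
    static Wrapper roundTrip(Wrapper w) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(w);
        }
        try (ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            return (Wrapper) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Wrapper copy = roundTrip(new Wrapper("demo"));
        System.out.println(copy.appName);         // the non-transient field survives
        System.out.println(copy.context == null); // true: the transient field was skipped
    }
}
```

This is also why any worker-side code must not touch the transient field: on the deserialized copy it is null.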

You could also make your own SparkContext wrapper non-serializable and mark the field transient; the effect is the same as above. (BTW, given that SparkContext is the Scala class name for the Spark context, I'd choose another name to avoid confusion down the road.)

One more thing: as you pointed out, the reason Spark tries to serialize the complete enclosing class is that a method of that class is used within a closure. Avoid that! Use an anonymous class or a self-contained closure (which translates into an anonymous class in the end).
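The capture problem itself can also be reproduced in plain Java, without Spark: a serializable lambda that calls an instance method drags the enclosing instance into serialization, while a reference to a static method stays self-contained. ClosureDemo, SerBinOp, and the max methods below are illustrative names, not Spark API:

```java
import java.io.*;
import java.util.function.BinaryOperator;

public class ClosureDemo {
    // A serializable binary operator, like the functions Spark ships to workers
    interface SerBinOp<T> extends BinaryOperator<T>, Serializable {}

    static int max(int a, int b) { return a > b ? a : b; }  // self-contained
    int instanceMax(int a, int b) { return a > b ? a : b; } // needs a ClosureDemo instance

    // Try to serialize o; false means a "not serializable" failure, as in Spark
    static boolean serializes(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        ClosureDemo demo = new ClosureDemo(); // not Serializable

        // Calls an instance method, so it captures `demo` -> serialization fails,
        // just like the Task not serializable exception in the question
        SerBinOp<Integer> capturing = (a, b) -> demo.instanceMax(a, b);

        // References only a static method, captures nothing -> serializes fine
        SerBinOp<Integer> selfContained = ClosureDemo::max;

        System.out.println(serializes(capturing));     // false
        System.out.println(serializes(selfContained)); // true
    }
}
```

Applied to the question's code, the same idea would mean making getBiggestPointByXDimension a static method (or a standalone serializable function class) so the reduce lambda no longer captures the TopK instance.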



Source: https://stackoverflow.com/questions/27077247/should-i-leave-the-variable-as-transient
