I have created a simple Java application that uses Apache Spark to retrieve data from Cassandra, perform some transformations on it, and save it to another Cassandra table.
I am using Apache Spark 1.4.1, configured in standalone cluster mode with a single master and a single slave, both located on my machine.
DataFrame customers = sqlContext.cassandraSql("SELECT email, first_name, last_name FROM customer " +
"WHERE CAST(store_id as string) = '" + storeId + "'");
DataFrame customersWhoOrderedTheProduct = sqlContext.cassandraSql("SELECT email FROM customer_bought_product " +
"WHERE CAST(store_id as string) = '" + storeId + "' AND product_id = " + productId + "");
// We need only the customers who did not order the product
// We cache the DataFrame because we use it twice.
DataFrame customersWhoHaventOrderedTheProduct = customers
        .join(customersWhoOrderedTheProduct.select(customersWhoOrderedTheProduct.col("email")),
                customers.col("email").equalTo(customersWhoOrderedTheProduct.col("email")), "leftouter")
        .where(customersWhoOrderedTheProduct.col("email").isNull())
        .drop(customersWhoOrderedTheProduct.col("email"))
        .cache();
int numberOfCustomers = (int) customersWhoHaventOrderedTheProduct.count();
Date reportTime = new Date();
// Prepare the Broadcast values. They are used in the map below.
Broadcast<String> bStoreId = sparkContext.broadcast(storeId, classTag(String.class));
Broadcast<String> bReportName = sparkContext.broadcast(MessageBrokerQueue.report_did_not_buy_product.toString(), classTag(String.class));
Broadcast<java.sql.Timestamp> bReportTime = sparkContext.broadcast(new java.sql.Timestamp(reportTime.getTime()), classTag(java.sql.Timestamp.class));
Broadcast<Integer> bNumberOfCustomers = sparkContext.broadcast(numberOfCustomers, classTag(Integer.class));
// Map the customers to a custom class, thus adding new properties.
DataFrame storeCustomerReport = sqlContext.createDataFrame(customersWhoHaventOrderedTheProduct.toJavaRDD()
        .map(row -> new StoreCustomerReport(bStoreId.value(), bReportName.value(), bReportTime.value(),
                bNumberOfCustomers.value(), row.getString(0), row.getString(1), row.getString(2))),
        StoreCustomerReport.class);
// Save the DataFrame to Cassandra
storeCustomerReport.write().mode(SaveMode.Append)
.option("keyspace", "my_keyspace")
.option("table", "my_report")
.format("org.apache.spark.sql.cassandra")
.save();
As you can see, I cache the customersWhoHaventOrderedTheProduct DataFrame, and after that I execute a count and call toJavaRDD.
By my calculations these actions should be executed only once. But when I go to the Spark UI for the current job, I see the following stages:
As you can see, every action is executed twice.
Am I doing something wrong? Is there any setting that I've missed?
Any ideas are greatly appreciated.
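To check whether the cached data is actually being reused, the physical plan and the RDD lineage can be inspected (a minimal diagnostic sketch, using the DataFrame from above; a cached DataFrame should show up as an in-memory scan rather than another Cassandra scan):

// Print the physical plan; a cached DataFrame should appear as an in-memory
// scan instead of a Cassandra table scan.
customersWhoHaventOrderedTheProduct.explain();
// Print the RDD lineage behind the DataFrame.
System.out.println(customersWhoHaventOrderedTheProduct.toJavaRDD().toDebugString());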
EDIT:
This is the debug string after calling System.out.println(storeCustomerReport.toJavaRDD().toDebugString()):
(200) MapPartitionsRDD[43] at toJavaRDD at DidNotBuyProductReport.java:93 []
| MapPartitionsRDD[42] at createDataFrame at DidNotBuyProductReport.java:89 []
| MapPartitionsRDD[41] at map at DidNotBuyProductReport.java:90 []
| MapPartitionsRDD[40] at toJavaRDD at DidNotBuyProductReport.java:89 []
| MapPartitionsRDD[39] at toJavaRDD at DidNotBuyProductReport.java:89 []
| MapPartitionsRDD[38] at toJavaRDD at DidNotBuyProductReport.java:89 []
| ZippedPartitionsRDD2[37] at toJavaRDD at DidNotBuyProductReport.java:89 []
| MapPartitionsRDD[31] at toJavaRDD at DidNotBuyProductReport.java:89 []
| ShuffledRDD[30] at toJavaRDD at DidNotBuyProductReport.java:89 []
+-(2) MapPartitionsRDD[29] at toJavaRDD at DidNotBuyProductReport.java:89 []
| MapPartitionsRDD[28] at toJavaRDD at DidNotBuyProductReport.java:89 []
| MapPartitionsRDD[27] at toJavaRDD at DidNotBuyProductReport.java:89 []
| MapPartitionsRDD[3] at cache at DidNotBuyProductReport.java:76 []
| CassandraTableScanRDD[2] at RDD at CassandraRDD.scala:15 []
| MapPartitionsRDD[36] at toJavaRDD at DidNotBuyProductReport.java:89 []
| ShuffledRDD[35] at toJavaRDD at DidNotBuyProductReport.java:89 []
+-(2) MapPartitionsRDD[34] at toJavaRDD at DidNotBuyProductReport.java:89 []
| MapPartitionsRDD[33] at toJavaRDD at DidNotBuyProductReport.java:89 []
| MapPartitionsRDD[32] at toJavaRDD at DidNotBuyProductReport.java:89 []
| MapPartitionsRDD[5] at cache at DidNotBuyProductReport.java:76 []
| CassandraTableScanRDD[4] at RDD at CassandraRDD.scala:15 []
EDIT 2:
So after some research combined with trial and error, I managed to optimize the job.
I created an RDD from customersWhoHaventOrderedTheProduct and cached it before calling the count() action (I moved the cache from the DataFrame to the RDD). After that I use this RDD to create the storeCustomerReport DataFrame.
JavaRDD<Row> customersWhoHaventOrderedTheProductRdd = customersWhoHaventOrderedTheProduct.javaRDD().cache();
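The count and the report DataFrame are then built from that cached RDD (a sketch of the surrounding code; the Broadcast variables are created after the count, exactly as in the original code):

// The count now runs against the cached RDD and materializes it.
int numberOfCustomers = (int) customersWhoHaventOrderedTheProductRdd.count();

// ... Broadcast setup as before ...

// The report DataFrame is built from the same cached RDD instead of calling
// toJavaRDD() on the DataFrame again.
DataFrame storeCustomerReport = sqlContext.createDataFrame(
        customersWhoHaventOrderedTheProductRdd.map(row -> new StoreCustomerReport(
                bStoreId.value(), bReportName.value(), bReportTime.value(), bNumberOfCustomers.value(),
                row.getString(0), row.getString(1), row.getString(2))),
        StoreCustomerReport.class);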
Now the stages look like this:
As you can see, the two count and cache stages are now gone, but there are still two 'javaRDD' actions. I have no idea where they are coming from, as I call toJavaRDD only once in my code.
It looks like you are applying two actions in the code segment below:
// Map the customers to a custom class, thus adding new properties.
DataFrame storeCustomerReport = sqlContext.createDataFrame(customersWhoHaventOrderedTheProduct.toJavaRDD()
        .map(row -> new StoreCustomerReport(bStoreId.value(), bReportName.value(), bReportTime.value(),
                bNumberOfCustomers.value(), row.getString(0), row.getString(1), row.getString(2))),
        StoreCustomerReport.class);
// Save the DataFrame to Cassandra
storeCustomerReport.write().mode(SaveMode.Append)
.option("keyspace", "my_keyspace")
One is at sqlContext.createDataFrame() and the other at storeCustomerReport.write(), and both of them require customersWhoHaventOrderedTheProduct.toJavaRDD().
Persisting the RDD produced by toJavaRDD() should solve this issue:
JavaRDD<Row> cachedRdd = customersWhoHaventOrderedTheProduct.toJavaRDD().persist(StorageLevel.MEMORY_AND_DISK()); // or any other storage level
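Both downstream steps can then reuse the persisted RDD, and it can be released once the write has finished (a sketch along those lines, reusing the variables from the question):

// Build the report DataFrame from the persisted RDD instead of calling
// toJavaRDD() on the DataFrame a second time.
DataFrame storeCustomerReport = sqlContext.createDataFrame(
        cachedRdd.map(row -> new StoreCustomerReport(
                bStoreId.value(), bReportName.value(), bReportTime.value(), bNumberOfCustomers.value(),
                row.getString(0), row.getString(1), row.getString(2))),
        StoreCustomerReport.class);

// The write re-evaluates the plan, but it now reads from the persisted RDD
// instead of re-scanning Cassandra.
storeCustomerReport.write().mode(SaveMode.Append)
        .option("keyspace", "my_keyspace")
        .option("table", "my_report")
        .format("org.apache.spark.sql.cassandra")
        .save();

// Optional cleanup once both steps are done.
cachedRdd.unpersist();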
Source: https://stackoverflow.com/questions/33760644/spark-is-executing-every-single-action-two-times