How to use spark_apply to change NaN values?
After using `sdf_pivot` I was left with a huge number of NaN values, so in order to proceed with my analysis I need to replace the NaN with 0. I have tried using this:

    data <- data %>% spark_apply(function(e) ifelse(is.nan(e), 0, e))

and this generates the following error:

    Error in file(con, "r") : cannot open the connection
    In addition: Warning message:
    In file(con, "r") :
      cannot open file 'C:\.........\file18dc5a1c212e_spark.log': Permission denied

I'm using Spark 2.2.0 and the latest version of
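For reference, a sketch of the intended replacement logic: `spark_apply()` passes each partition to the supplied function as an R data frame, so the NaN substitution is usually applied column by column rather than to the whole frame at once. This assumes an existing sparklyr connection and that `data` is a Spark DataFrame; it does not address the log-file permission error above, which appears to be a separate local file-access issue.

    library(sparklyr)
    library(dplyr)

    # Replace NaN with 0 in every column of each partition.
    # is.nan() returns FALSE for non-numeric columns, so they pass through unchanged.
    data <- data %>%
      spark_apply(function(df) {
        df[] <- lapply(df, function(col) ifelse(is.nan(col), 0, col))
        df
      })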