Question
I would like to perform a factor analysis in Databricks by first pulling the data to the driver with dplyr::collect(), but because of the dataset's size I am getting this error:
Error : org.apache.spark.sql.execution.OutOfMemorySparkException: Total memory usage during row decode exceeds spark.driver.maxResultSize (4.0 GB). The average row size was 82.0 B
Is there a sparklyr function that would let me do this analysis without collecting the data?
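As far as I know, sparklyr does not expose a dedicated factor-analysis routine, but it does wrap Spark MLlib's distributed PCA via ml_pca(), which is closely related and runs entirely on the cluster, so nothing is collected to the driver. A minimal sketch (the connection method and table name `my_table` are assumptions for illustration):

```r
library(sparklyr)
library(dplyr)

# Connect to the cluster; on Databricks the "databricks" method
# picks up the existing Spark session
sc <- spark_connect(method = "databricks")

# Reference the table lazily -- no data leaves the cluster
dat <- tbl(sc, "my_table")  # hypothetical table name

# Distributed PCA with k components; computation happens Spark-side,
# so spark.driver.maxResultSize is never hit
fit <- ml_pca(dat, k = 4)
fit
```

Whether PCA is an acceptable substitute for factor analysis depends on the use case; if a true factor model is required, an alternative is to compute the correlation matrix on the cluster and run the factor analysis locally on that small matrix.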
Source: https://stackoverflow.com/questions/64113459/factor-analysis-using-sparklyr-in-databricks