Question
I have a Spark job that reads a CSV file and then does a bunch of joins and column renames. The file size is on the order of megabytes.
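Roughly, the pipeline looks like this (a simplified sketch, not my real code; the paths, DataFrame names, and column names below are placeholders):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("csv_joins").getOrCreate()

# Read the input CSVs (paths and schemas are made up for illustration)
main_df = spark.read.csv("data/main.csv", header=True, inferSchema=True)
lookup_df = spark.read.csv("data/lookup.csv", header=True, inferSchema=True)

# A handful of joins and column renames, roughly like the real job
info_collect = (
    main_df.join(lookup_df, on="id", how="left")
           .withColumnRenamed("old_col", "new_col")
)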
x = info_collect.collect()
The size of x in Python is around 100 MB.
However, the job crashes with an out-of-memory error, and checking Ganglia I can see memory usage go up to about 80 GB. I have no idea why collecting ~100 MB can cause memory to spike like that.
Could someone please advise?
Source: https://stackoverflow.com/questions/52483267/pyspark-collect-causing-memory-to-shoot-up-80gb