PySpark collect() causing memory to shoot up to 80 GB

Submitted by 十年热恋 on 2019-12-23 02:01:47

Question


I have a Spark job that reads a CSV file and does a bunch of joins and column renames. The file size is on the order of megabytes.
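For context, here is a minimal sketch of the kind of pipeline described above; the file paths, join key, and column names are made up, since the question does not include them:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical inputs: a small CSV joined against other data,
# followed by a column rename.
info = spark.read.csv("info.csv", header=True, inferSchema=True)
lookup = spark.read.csv("lookup.csv", header=True, inferSchema=True)

info_collect = (
    info.join(lookup, on="id", how="left")
        .withColumnRenamed("old_name", "new_name")
)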

x = info_collect.collect()


The size of x in Python is around 100 MB. However, I get a memory crash, and checking Ganglia shows memory usage going up to 80 GB. I have no idea why collecting 100 MB can cause memory to spike like that.
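For reference, here is a sketch of two things commonly tried in this situation, assuming the SparkSession and the info_collect DataFrame from the sketch above; the checkpoint directory path is a placeholder, and this is not presented as the definitive fix:

# collect() materializes every row on the driver; the deserialized Python
# objects plus JVM-side buffers can be many times larger than the CSV on disk.
# x = info_collect.collect()

# Option 1: cut the join lineage before collecting, so Spark does not keep
# the whole query plan and its intermediate state alive on the driver.
spark.sparkContext.setCheckpointDir("/tmp/spark-checkpoints")
info_checkpointed = info_collect.checkpoint()

# Option 2: pull rows one partition at a time instead of all at once,
# which caps the driver's peak memory.
rows = [row for row in info_checkpointed.toLocalIterator()]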

Could someone please advise?

Source: https://stackoverflow.com/questions/52483267/pyspark-collect-causing-memory-to-shoot-up-80gb
