java.lang.OutOfMemoryError: Unable to acquire 100 bytes of memory, got 0

Frontend · Open · 5 answers · 2052 views
眼角桃花 · 2020-12-15 06:00

I'm invoking Pyspark with Spark 2.0 in local mode with the following command:

pyspark --executor-memory 4g --driver-memory 4g

The input da…

5 Answers
  • 2020-12-15 06:20

    In my case the driver had less memory than the workers. The issue was resolved by making the driver larger.
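    A sketch of that fix under the question's setup (the flag and config names are real Spark options; the 8g value is only an example). Note that in local mode the executors run inside the driver JVM, so spark.driver.memory is the setting that actually bounds the heap:

        pyspark --driver-memory 8g

        # Equivalent in a standalone script launched with plain python (not
        # spark-submit): spark.driver.memory must be set before the JVM is
        # launched, i.e. before the first SparkSession in this process.
        from pyspark.sql import SparkSession
        spark = (SparkSession.builder
                 .master("local[*]")
                 .config("spark.driver.memory", "8g")
                 .getOrCreate())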

  • 2020-12-15 06:26

    The problem for me was indeed coalesce(). Instead of exporting the file with coalesce(), I first wrote it out as Parquet using df.write.parquet("testP"), then read that back and exported it with coalesce(1).
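    A minimal sketch of that round trip ("testP" is the path from this answer; df and the final output call are placeholders, since the question's output step is cut off):

        # Write the full result in parallel first; no coalesce() involved yet.
        df.write.parquet("testP")

        # Read the materialized data back, then collapse it to one output file.
        spark.read.parquet("testP").coalesce(1).write.csv("out", header=True)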

    Hopefully it works for you as well.

  • 2020-12-15 06:26

    In my case, replacing coalesce(1) with repartition(1) worked.
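    For example (df and the "out" path are placeholders):

        # Before -- all upstream work is squeezed into a single task:
        df.coalesce(1).write.parquet("out")

        # After -- upstream stages keep their parallelism; a shuffle step
        # merges the result into one partition at the end:
        df.repartition(1).write.parquet("out")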

  • 2020-12-15 06:30

    I believe the cause of this problem is coalesce(): although it avoids a full shuffle (as repartition would do), it still has to shrink the data down to the requested number of partitions.

    Here you are asking all of the data to fit into a single partition, so one task (and only one task) has to process all of it, which can push its container past its memory limits.

    So either ask for more than one partition, or avoid coalesce() in this case; a sketch of the first option follows the links below.


    Otherwise, you could try the solutions provided in the links below for increasing your memory configuration:

    1. Spark java.lang.OutOfMemoryError: Java heap space
    2. Spark runs out of memory when grouping by key
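    As a sketch of the "more than one partition" middle ground (16 is an arbitrary example; df and the path are placeholders):

        # Still cuts the number of output files without a full shuffle,
        # but keeps 16-way parallelism in the final stage instead of 1.
        df.coalesce(16).write.parquet("out")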
  • 2020-12-15 06:38

    As was stated in other answers, use repartition(1) instead of coalesce(1). The reason is that repartition(1) will ensure that upstream processing is done in parallel (multiple tasks/partitions), rather than on only one executor.

    To quote the Dataset.coalesce() Spark docs:

    However, if you're doing a drastic coalesce, e.g. to numPartitions = 1, this may result in your computation taking place on fewer nodes than you like (e.g. one node in the case of numPartitions = 1). To avoid this, you can call repartition(1) instead. This will add a shuffle step, but means the current upstream partitions will be executed in parallel (per whatever the current partitioning is).
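    A quick way to see the difference the quote describes is to compare the physical plans (the exact plan text varies by Spark version):

        df = spark.range(0, 1000000, numPartitions=8)

        # No shuffle: the plan shows a Coalesce step, so everything above
        # it runs in a single task.
        df.coalesce(1).explain()

        # With a shuffle: the plan shows an Exchange step; the 8 upstream
        # partitions are still processed in parallel before being merged.
        df.repartition(1).explain()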
