Apache Spark + Parquet not Respecting Configuration to use “Partitioned” Staging S3A Committer

Posted by 妖精的绣舞 on 2021-02-11 12:31:30

Question


I am writing partitioned data (Parquet files) to AWS S3 using Apache Spark (3.0) from my local machine, without having Hadoop installed. I was getting a FileNotFoundException while writing to S3 when I had a lot of files to write across around 50 partitions (partitionBy = date).
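For context, a minimal sketch of the kind of write that triggers this (the DataFrame `df`, the bucket, and the path are hypothetical stand-ins, not the exact code):

// Hypothetical reproduction of the failing write: ~50 date partitions
// written as Parquet directly to S3 via the s3a:// connector.
df.write()
  .format("parquet")
  .partitionBy("date")                    // ~50 distinct dates
  .mode(SaveMode.Append)
  .save("s3a://my-bucket/observation");   // hypothetical bucket/path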

Then I came across the new S3A committers, so I tried to configure the "partitioned" committer instead. But I could still see that Spark uses ParquetOutputCommitter instead of PartitionedStagingCommitter when the file format is "parquet", and I still get a FileNotFoundException when I have a lot of data to write.

My Configuration:

        sparkSession.conf().set("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", 2);
        sparkSession.conf().set("spark.hadoop.fs.s3a.committer.name", "partitioned");
        sparkSession.conf().set("spark.hadoop.fs.s3a.committer.magic.enabled ", false);
        sparkSession.conf().set("spark.hadoop.fs.s3a.committer.staging.conflict-mode", "append");
        sparkSession.conf().set("spark.hadoop.fs.s3a.committer.staging.unique-filenames", true);
        sparkSession.conf().set("spark.hadoop.fs.s3a.committer.staging.abort.pending.uploads", true);
        sparkSession.conf().set("spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a", "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory");
        sparkSession.conf().set("spark.sql.sources.commitProtocolClass", "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol");
        sparkSession.conf().set("spark.sql.parquet.output.committer.class", "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter");
        sparkSession.conf().set("spark.hadoop.fs.s3a.committer.staging.tmp.path", "tmp/staging");

What am I doing incorrectly? Could someone please help?

Note: I have filed a Spark JIRA for this, but no help so far: SPARK-31072

==============================================================

I tried the answer from @Rajadayalan, but it is still using FileOutputCommitter. I also tried downgrading the Spark version to 2.4.5, without any luck.

20/04/06 12:44:52 INFO ParquetFileFormat: Using user defined output committer for Parquet: org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
20/04/06 12:44:52 WARN AbstractS3ACommitterFactory: **Using standard FileOutputCommitter to commit work**. This is slow and potentially unsafe.
20/04/06 12:44:52 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
20/04/06 12:44:52 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20/04/06 12:44:52 INFO AbstractS3ACommitterFactory: Using Commmitter FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_20200406124452_0000}; taskId=attempt_20200406124452_0000_m_000000_0, status=''}; org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter@61deb03f}; outputPath=s3a://******/observation, workPath=s3a://******/observation/_temporary/0/_temporary/attempt_20200406124452_0000_m_000000_0, algorithmVersion=2, skipCleanup=false, ignoreCleanupFailures=false} for s3a://********/observation
20/04/06 12:44:53 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
20/04/06 12:44:54 INFO CodeGenerator: Code generated in 81.077046 ms
20/04/06 12:44:54 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
20/04/06 12:44:54 INFO CodeGenerator: Code generated in 31.993775 ms
20/04/06 12:44:54 INFO CodeGenerator: Code generated in 9.967359 ms

Note: I don't have Spark installed locally, so I added spark-hadoop-cloud_2.11 as a compile-time dependency. My build.gradle looks as follows:

    compile group: 'org.apache.spark', name: 'spark-hadoop-cloud_2.11', version: '2.4.2.3.1.3.0-79'
    compile group: 'org.apache.spark', name: 'spark-sql_2.11', version: '2.4.5'
    // https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-databind
    compile group: 'com.fasterxml.jackson.core', name: 'jackson-databind', version: '2.10.0'
    // https://mvnrepository.com/artifact/org.apache.parquet/parquet-column
    compile group: 'org.apache.parquet', name: 'parquet-column', version: '1.10.1'
    // https://mvnrepository.com/artifact/org.apache.parquet/parquet-hadoop
    compile group: 'org.apache.parquet', name: 'parquet-hadoop', version: '1.10.1'
    compile group: 'org.apache.parquet', name: 'parquet-avro', version: '1.10.1'
    // https://mvnrepository.com/artifact/org.apache.spark/spark-sketch
    compile group: 'org.apache.spark', name: 'spark-sketch_2.11', version: '2.4.5'
    // https://mvnrepository.com/artifact/org.apache.spark/spark-core
    compile group: 'org.apache.spark', name: 'spark-core_2.11', version: '2.4.5'
    // https://mvnrepository.com/artifact/org.apache.spark/spark-catalyst
    compile group: 'org.apache.spark', name: 'spark-catalyst_2.11', version: '2.4.5'
    // https://mvnrepository.com/artifact/org.apache.spark/spark-tags
    compile group: 'org.apache.spark', name: 'spark-tags_2.11', version: '2.4.5'
    compile group: 'org.apache.spark', name: 'spark-avro_2.11', version: '2.4.5'
    // https://mvnrepository.com/artifact/org.apache.spark/spark-hive
    compile group: 'org.apache.spark', name: 'spark-hive_2.11', version: '2.4.5'
    // https://mvnrepository.com/artifact/org.apache.xbean/xbean-asm7-shaded
    compile group: 'org.apache.xbean', name: 'xbean-asm7-shaded', version: '4.15'
    compile group: 'org.apache.hadoop', name: 'hadoop-common', version: '3.2.1'
//    compile group: 'org.apache.hadoop', name: 'hadoop-s3guard', version: '3.2.1'
    compile group: 'org.apache.hadoop', name: 'hadoop-aws', version: '3.2.1'
    compile group: 'org.apache.hadoop', name: 'hadoop-client', version: '3.2.1'
    compile group: 'com.amazonaws', name: 'aws-java-sdk-bundle', version: '1.11.271'

Answer 1:


I had the same issue. The solution from "How To Get Local Spark on AWS to Write to S3" worked to load the PartitionedStagingCommitter. You also have to download the spark-hadoop-cloud jar, as mentioned in that solution.

I also use Spark 3.0, and this version of the jar worked: https://repo.hortonworks.com/content/repositories/releases/org/apache/spark/spark-hadoop-cloud_2.11/2.4.2.3.1.3.0-79/

Settings in my spark-defaults.conf

spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version  2
spark.hadoop.fs.s3a.committer.name                            partitioned
spark.hadoop.fs.s3a.committer.magic.enabled                   false
spark.hadoop.fs.s3a.committer.staging.conflict-mode           append
spark.hadoop.fs.s3a.committer.staging.unique-filenames        true
spark.hadoop.fs.s3a.committer.staging.abort.pending.uploads   true
spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a     org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
spark.sql.sources.commitProtocolClass                         org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
spark.sql.parquet.output.committer.class                      org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
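If you would rather keep these in code than in spark-defaults.conf, the same keys (taken verbatim from the list above) can also be passed when the session is built. A sketch, not something I have verified beyond the defaults-file approach; app name and master are hypothetical:

SparkSession spark = SparkSession.builder()
        .appName("s3a-partitioned-committer")      // hypothetical app name
        .master("local[*]")                        // local run, as in the question
        .config("spark.hadoop.fs.s3a.committer.name", "partitioned")
        .config("spark.hadoop.fs.s3a.committer.staging.conflict-mode", "append")
        .config("spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a",
                "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory")
        .config("spark.sql.sources.commitProtocolClass",
                "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
        .config("spark.sql.parquet.output.committer.class",
                "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
        .getOrCreate();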



Answer 2:


I got this working with a small change from what @Rajadayalan suggested. Apart from the sparkSession.conf().set() calls in the initial question, I added the option() params on the DataFrameWriter while writing the Parquet files:

df.distinct()
  .withColumn("date", date_format(col(EFFECTIVE_PERIOD_START), "yyyy-MM-dd"))
  .repartition(col("date"))
  .write()
  .format(fileFormat)
  .partitionBy("date")
  .mode(SaveMode.Append)
  .option("fs.s3a.committer.name", "partitioned")
  .option("fs.s3a.committer.staging.conflict-mode", "append")
  .option("spark.sql.sources.commitProtocolClass", "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
  .option("spark.sql.parquet.output.committer.class", "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
  .option("compression", compressionCodecName.name().toLowerCase())
  .save(DOWNLOADS_NON_COMPACT_PATH);

This makes the difference, and the following log output shows that it is using the PartitionedStagingCommitter.

I could also see that the _SUCCESS file in S3 is now a JSON file instead of an empty touch file.

20/04/06 14:27:26 INFO ParquetFileFormat: Using user defined output committer for Parquet: org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
20/04/06 14:27:26 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
20/04/06 14:27:26 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20/04/06 14:27:26 INFO AbstractS3ACommitterFactory: Using committer partitioned to output data to s3a://************/observation
20/04/06 14:27:26 INFO AbstractS3ACommitterFactory: Using Commmitter PartitionedStagingCommitter{StagingCommitter{AbstractS3ACommitter{role=Task committer attempt_20200406142726_0000_m_000000_0, name=partitioned, outputPath=s3a://*********/observation, workPath=file:/tmp/hadoop-**********/s3a/local-1586197641397/_temporary/0/_temporary/attempt_20200406142726_0000_m_000000_0}, conflictResolution=APPEND, wrappedCommitter=FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_20200406142726_0000}; taskId=attempt_20200406142726_0000_m_000000_0, status=''}; org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter@4494e88a}; outputPath=file:/Users/**********/Downloads/SparkParquetSample/tmp/staging/**********/local-1586197641397/staging-uploads, workPath=null, algorithmVersion=1, skipCleanup=false, ignoreCleanupFailures=false}}} for s3a://parquet-uuid-test/device-metric-observation6
20/04/06 14:27:27 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
20/04/06 14:27:27 INFO CodeGenerator: Code generated in 52.744811 ms
20/04/06 14:27:27 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
20/04/06 14:27:27 INFO CodeGenerator: Code generated in 48.78277 ms
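As a quick sanity check on the _SUCCESS observation above (my own addition, with a hypothetical path): the S3A committers write _SUCCESS as a JSON manifest (org.apache.hadoop.fs.s3a.commit.files.SuccessData), while the classic FileOutputCommitter leaves a zero-byte marker, so the file's size alone tells you which committer actually ran:

// Hypothetical check: non-empty _SUCCESS => S3A committer manifest,
// zero-byte _SUCCESS => classic FileOutputCommitter.
Path success = new Path("s3a://my-bucket/observation/_SUCCESS"); // hypothetical path
FileSystem fs = success.getFileSystem(spark.sparkContext().hadoopConfiguration());
long len = fs.getFileStatus(success).getLen();
System.out.println(len > 0
        ? "_SUCCESS is non-empty => S3A committer JSON manifest"
        : "_SUCCESS is empty => classic FileOutputCommitter");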


Source: https://stackoverflow.com/questions/60955453/apache-spark-parquet-not-respecting-configuration-to-use-partitioned-staging
