BigQuery connector for pyspark via Hadoop Input Format example


I have a large dataset stored in a BigQuery table and I would like to load it into a pyspark RDD for ETL data processing.

I realized that BigQuery supports the Hadoop InputFormat interface, which pyspark should be able to consume through sc.newAPIHadoopRDD.

1 Answer
  • 2020-12-30 08:21

    Google now has an example of how to use the BigQuery connector with Spark.

    There does seem to be a problem using the GsonBigQueryInputFormat, but I got a simple Shakespeare word-counting example working with the JsonTextBigQueryInputFormat instead:

    import json
    import pyspark

    sc = pyspark.SparkContext()

    # Use the cluster's GCS staging bucket for the BigQuery export files.
    hadoopConf = sc._jsc.hadoopConfiguration()
    bucket = hadoopConf.get("fs.gs.system.bucket")

    conf = {"mapred.bq.project.id": "<project_id>", "mapred.bq.gcs.bucket": bucket,
            "mapred.bq.input.project.id": "publicdata",
            "mapred.bq.input.dataset.id": "samples",
            "mapred.bq.input.table.id": "shakespeare"}

    # Records arrive as (LongWritable, JSON string) pairs: parse each line,
    # then sum word_count per word.
    tableData = sc.newAPIHadoopRDD(
        "com.google.cloud.hadoop.io.bigquery.JsonTextBigQueryInputFormat",
        "org.apache.hadoop.io.LongWritable",
        "com.google.gson.JsonObject",
        conf=conf).map(lambda kv: json.loads(kv[1])) \
        .map(lambda row: (row["word"], int(row["word_count"]))) \
        .reduceByKey(lambda x, y: x + y)
    print(tableData.take(10))
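
    Note that the connector works by first exporting the table as JSON files to a temporary GCS path, and those files are worth deleting once the RDD has been processed. Below is a minimal cleanup sketch, assuming the export location is recorded under the mapred.bq.temp.gcs.path key (the key name follows the connector's defaults; verify it against your connector version):

    # Assumed: "mapred.bq.temp.gcs.path" holds the temporary export directory
    # the connector wrote to GCS; delete it recursively via the Hadoop FileSystem API.
    input_directory = hadoopConf.get("mapred.bq.temp.gcs.path")
    if input_directory:
        input_path = sc._jvm.org.apache.hadoop.fs.Path(input_directory)
        input_path.getFileSystem(hadoopConf).delete(input_path, True)

    On Dataproc images the BigQuery connector is typically preinstalled; elsewhere, pass the connector jar to spark-submit with --jars so the input format class is on the classpath.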
    