How to read BigQuery table using python pipeline code in GCP Dataflow

Submitted by 巧了我就是萌 on 2019-12-10 17:16:57

Question


Could someone please share the syntax for reading and writing a BigQuery table in a pipeline written in Python for GCP Dataflow?


Answer 1:


Run on Dataflow

First, construct a Pipeline with the following options for it to run on GCP Dataflow:

import apache_beam as beam

# Replace the <...> placeholders with your own values.
options = {'project': <project>,
           'runner': 'DataflowRunner',
           'region': <region>,
           'setup_file': <setup.py file>}
pipeline_options = beam.pipeline.PipelineOptions(flags=[], **options)
pipeline = beam.Pipeline(options=pipeline_options)
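Note that running on the DataflowRunner also requires a Cloud Storage location for temporary files, which the BigQuery connector uses for staging. A minimal sketch with concrete (hypothetical) values substituted for the placeholders:

import apache_beam as beam

# Hypothetical values; substitute your own project, region, bucket, and setup file.
options = {'project': 'my-project',
           'runner': 'DataflowRunner',
           'region': 'us-central1',
           'temp_location': 'gs://my-bucket/temp',
           'setup_file': './setup.py'}
pipeline_options = beam.pipeline.PipelineOptions(flags=[], **options)
pipeline = beam.Pipeline(options=pipeline_options)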

Read from BigQuery

Define a BigQuerySource with your query and use beam.io.Read to read data from BQ:

BQ_source = beam.io.BigQuerySource(query = <query>)
BQ_data = pipeline | beam.io.Read(BQ_source)
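In newer Beam releases, BigQuerySource is deprecated in favor of beam.io.ReadFromBigQuery. A minimal sketch of the equivalent read, assuming a hypothetical query and the pipeline object from above:

# Newer Beam API; stages export files in the pipeline's temp_location.
BQ_data = pipeline | 'ReadFromBQ' >> beam.io.ReadFromBigQuery(
    query='SELECT * FROM `my-project.my_dataset.my_table`',  # hypothetical query
    use_standard_sql=True)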

Write to BigQuery

There are two options for writing to BigQuery (a fuller WriteToBigQuery example follows this list):

  • use a BigQuerySink and beam.io.Write:

    BQ_sink = beam.io.BigQuerySink(<table>, dataset=<dataset>, project=<project>)
    BQ_data | beam.io.Write(BQ_sink)
    
  • use beam.io.WriteToBigQuery:

    BQ_data | beam.io.WriteToBigQuery(<table>, dataset=<dataset>, project=<project>)
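Whichever option you choose, WriteToBigQuery also accepts a schema and create/write dispositions, as in the sketch below (hypothetical table, dataset, and schema names):

BQ_data | 'WriteToBQ' >> beam.io.WriteToBigQuery(
    'my_table',                           # hypothetical table
    dataset='my_dataset',
    project='my-project',
    schema='NAME:STRING, SCORE:INTEGER',  # hypothetical schema
    create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
    write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)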
    



Answer 2:


Reading from BigQuery

rows = (p | 'ReadFromBQ' >> beam.io.Read(
    beam.io.BigQuerySource(query=QUERY, use_standard_sql=True)))

Writing to BigQuery

rows | 'writeToBQ' >> beam.io.Write(
    beam.io.BigQuerySink(
        '{}:{}.{}'.format(PROJECT, BQ_DATASET_ID, BQ_TEST),
        schema='CONVERSATION:STRING, LEAD_ID:INTEGER',
        create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))
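Here the schema string is a comma-separated list of FIELD:TYPE pairs; CREATE_IF_NEEDED creates the table if it does not exist, and WRITE_TRUNCATE replaces any existing rows on each run.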


Source: https://stackoverflow.com/questions/48386148/how-to-read-bigquery-table-using-python-pipeline-code-in-gcp-dataflow
