Question
I have two questions:
1) I want to use Kafka in a Google Cloud Dataflow pipeline program. Is it possible to read data from Kafka in my pipeline?
2) I created an instance with BigQuery enabled, and now I want to enable Pub/Sub. How can I do that?
Answer 1:
(1) As mentioned by Raghu, support for writing to and reading from Kafka was added to Apache Beam in mid-2016 with the KafkaIO
package. You can check the package's documentation[1] to see how to use it.
(2) I'm not quite sure what you mean. Can you provide more details?
[1] https://beam.apache.org/releases/javadoc/current/org/apache/beam/sdk/io/kafka/KafkaIO.html
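To make the KafkaIO usage concrete, here is a minimal sketch of a Beam pipeline that reads from Kafka; the broker address (`broker-1:9092`) and topic name (`my-topic`) are hypothetical placeholders, not values from the question:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaReadSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply(KafkaIO.<String, String>read()
        .withBootstrapServers("broker-1:9092")       // hypothetical broker address
        .withTopic("my-topic")                       // hypothetical topic name
        .withKeyDeserializer(StringDeserializer.class)
        .withValueDeserializer(StringDeserializer.class)
        .withoutMetadata());                         // yields a PCollection<KV<String, String>>

    p.run().waitUntilFinish();
  }
}
```

Running this on Dataflow additionally requires the Dataflow runner on the classpath and the usual `--runner=DataflowRunner` pipeline options.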
Answer 2:
Kafka support was added to Dataflow (and Apache Beam) in mid-2016. You can read from and write to Kafka in streaming pipelines. See the JavaDoc for KafkaIO
in Apache Beam.
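For the write side, a minimal sketch using `KafkaIO.write()` might look like the following; the broker address, topic name, and the assumption that an upstream `PCollection<KV<String, String>>` called `records` exists are all hypothetical:

```java
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaWriteSketch {
  // Attach a Kafka sink to an existing collection of key/value records.
  static void writeToKafka(PCollection<KV<String, String>> records) {
    records.apply(KafkaIO.<String, String>write()
        .withBootstrapServers("broker-1:9092")   // hypothetical broker address
        .withTopic("output-topic")               // hypothetical topic name
        .withKeySerializer(StringSerializer.class)
        .withValueSerializer(StringSerializer.class));
  }
}
```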
Answer 3:
(2) As of April 27, 2015, you can enable the Cloud Pub/Sub API as follows:
- Go to your project page on the Developer Console
- Click APIs & auth -> APIs
- Click More within Google Cloud APIs
- Click Cloud Pub/Sub API
- Click Enable API
Source: https://stackoverflow.com/questions/29893342/is-it-possible-to-use-kafka-with-google-cloud-dataflow