apache-kafka-connect

Is it possible in Debezium to configure table_name => kafka topic mapping?

坚强是说给别人听的谎言 Submitted on 2019-12-02 11:53:11
I've read http://debezium.io/docs/connectors/mysql/ but I could not find any info about whether Debezium can be configured so that changes from two (or more) tables are written to the same, single Kafka topic. It seems to me that it is always 1 table -> 1 topic. Yes, use Single Message Transforms, per the link you identified. You can use regular expressions (regex) to map the tables to the required topic. Either io.debezium.transforms.ByLogicalTableRouter or org.apache.kafka.connect.transforms.RegexRouter should do the trick. There's an example of the latter in this post here: "transforms":
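As an illustrative sketch only (the server, database, and topic names below are hypothetical, not taken from the linked post), a RegexRouter transform in the connector's JSON config that collapses every table of one database into a single topic could look like:

"transforms": "Reroute",
"transforms.Reroute.type": "org.apache.kafka.connect.transforms.RegexRouter",
"transforms.Reroute.regex": "dbserver1\\.inventory\\.(.*)",
"transforms.Reroute.replacement": "all_inventory_changes"

This routes every dbserver1.inventory.<table> change topic produced by Debezium into the single topic all_inventory_changes.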

Apache Kafka JDBC Connector - SerializationException: Unknown magic byte

余生颓废 Submitted on 2019-12-02 11:34:05
We are trying to write back the values from a topic to a Postgres database using the Confluent JDBC Sink Connector.

connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
connection.password=xxx
tasks.max=1
topics=topic_name
auto.evolve=true
connection.user=confluent_rw
auto.create=true
connection.url=jdbc:postgresql://x.x.x.x:5432/Datawarehouse
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081

We can read the value
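Not part of the original post, but a common first check for "Unknown magic byte" with this setup is to confirm that the topic's records were actually produced with the Schema Registry Avro serializer; for example (broker address is a placeholder):

kafka-avro-console-consumer --bootstrap-server localhost:9092 \
  --topic topic_name --from-beginning \
  --property schema.registry.url=http://localhost:8081

If this command itself fails to deserialize the records, the data was not written as Schema Registry-framed Avro, and the AvroConverter in the sink will raise the same error.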

Kafka Connect with kerberized DB

心已入冬 Submitted on 2019-12-02 09:03:33
I am trying to use Kafka Connect as a sink. Our databases are kerberized, and I am not able to figure out how to specify Kerberos in the connection.url instead of a user/password. When I try to form the connection.url as jdbc:db2://<host>:<port>/database;userKerberos=true it fails, saying the URL is invalid. My question is: how can we use Kerberos in Kafka Connect rather than specifying a user and password? The other question is that even when I do supply a user/password, it throws the error below: A communication error occurred on the connection's underlying socket. Connection reset. ERRORCODE=-4499, SQLSTATE=08001 Source: https://stackoverflow.com
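Purely as a sketch, and assuming the IBM Data Server Driver for JDBC is on the Connect worker's classpath: that driver normally requests Kerberos through URL properties rather than a userKerberos flag. The exact URL syntax, property names, and principal below should be verified against the IBM driver documentation; the principal is a placeholder.

connection.url=jdbc:db2://<host>:<port>/database:securityMechanism=11;kerberosServerPrincipal=db2srv/host.example.com@EXAMPLE.COM;
# securityMechanism=11 is the driver's KERBEROS_SECURITY value; the Connect worker's JVM
# also needs its own Kerberos login (e.g. a JAAS config / keytab) for this to work.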

Kafka Connect - JDBC sink SQL exception

守給你的承諾、 Submitted on 2019-12-02 02:27:57
Question: I am using the Confluent Community edition for a simple setup consisting of a REST client calling the Kafka REST Proxy and then pushing that data into an Oracle database using the provided JDBC sink connector. I noticed that if there is an SQL exception, for instance if the data's length exceeds the defined column length, the task stops; if I restart it, the same thing happens: it tries to insert the erroneous entry and stops again. It does not insert the other entries. Is
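Not part of the original question, but as a hedged sketch of Kafka Connect's built-in error-handling settings (the dead-letter topic name is a placeholder); note that these cover failures in the converter and transform stages, and depending on the connector version they may not catch SQLExceptions raised inside the sink's put() call:

errors.tolerance=all
errors.deadletterqueue.topic.name=dlq_jdbc_sink
errors.deadletterqueue.context.headers.enable=true
errors.log.enable=true
errors.log.include.messages=true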

Kafka Elasticsearch Connector Timestamps

我的梦境 Submitted on 2019-12-02 01:53:52
I can see this has been discussed a few times here, for instance, but I think the solutions are out of date due to breaking changes in Elasticsearch. I'm trying to convert a long/epoch field in the JSON in my Kafka topic to an Elasticsearch date type as it is pushed through the connector. When I try to add a dynamic mapping, my Kafka Connect updates fail because I'm trying to apply two mappings to a field, _doc and kafkaconnect. This was a breaking change around version 6, I believe, where you can only have one mapping type per index. { "index_patterns": [ "depart_details" ], "mappings": { "dynamic
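One option, sketched here on the Connect side rather than in the Elasticsearch mapping (the field name epoch_field is a placeholder), is the built-in TimestampConverter Single Message Transform, which turns an epoch long into a Connect Timestamp before the record reaches the sink:

transforms=toDate
transforms.toDate.type=org.apache.kafka.connect.transforms.TimestampConverter$Value
transforms.toDate.field=epoch_field
transforms.toDate.target.type=Timestamp

With target.type=Timestamp the sink can index the field as a date, provided the source value is epoch milliseconds (TimestampConverter's assumption for numeric input).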

Camus Migration - Kafka HDFS Connect does not start from the set offset

允我心安 Submitted on 2019-12-01 11:07:31
Question: I am currently using the Confluent HDFS Sink Connector (v4.0.0) to replace Camus. We are dealing with sensitive data, so we need to maintain offset consistency during the cutover to connectors. Cutover plan: (1) We created an HDFS sink connector subscribed to a topic, which writes to a temporary HDFS file; this creates a consumer group named connect- (2) Stopped the connector using a DELETE request. (3) Using the /usr/bin/kafka-consumer-groups script, I am able to set the connector consumer group kafka topic
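For illustration only (the group name, topic, and offset below are placeholders, not the poster's values), resetting a stopped connector's consumer group with that script typically looks like:

/usr/bin/kafka-consumer-groups --bootstrap-server localhost:9092 \
  --group connect-my-hdfs-sink \
  --topic my_topic \
  --reset-offsets --to-offset 123456 --execute

The reset only succeeds while the group has no active members, i.e. while the connector is deleted or its tasks are stopped.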

Connect consumer jobs are getting deleted when restarting the cluster

点点圈 Submitted on 2019-12-01 11:03:00
Question: I am facing the issue below when changing some Kafka-related properties and restarting the cluster. In Kafka Connect there are 5 consumer jobs running. If we make some important property change, then on restarting the cluster some or all of the existing consumer jobs are not able to start. Ideally all the consumer jobs should start, since they should take their metadata from the system topics below: config.storage.topic, offset.storage.topic, status.storage.topic. Answer 1: First, a bit of
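As context for that answer (the values shown are the common defaults, used here as an illustration rather than the poster's actual settings), a distributed Connect worker persists connector state in those three internal topics, and the group.id plus topic names must stay the same across restarts for existing jobs to come back:

group.id=connect-cluster
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status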