apache-kafka-connect

How to import MS SQL Server tables to KSQL with Kafka Connect

Submitted by 主宰稳场 on 2019-12-01 09:51:50
Question: Hi, I am trying to import all the tables present on a remote SQL Server into KSQL topics. This is my properties file:

connector.class=io.confluent.connect.cdc.mssql.MsSqlSourceConnector
name=sqlservertest
tasks.max=1
initial.database=$$DATABASE
connection.url=jdbc:sqlserver://$$IP:1433;databaseName=$$DATABASE;user=$$USER;
username=$$USER
password=$$PASS
server.name=$$IP
server.port=1433
topic.prefix=sqlservertest
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url …
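The excerpt breaks off at key.converter.schema.registry.url. A minimal completion of the converter section might look like the sketch below; the Schema Registry URL and the value-converter lines are assumptions, not part of the original question.

# Assumed: a Schema Registry running locally; adjust the URL to your environment
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081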

Delete events from JDBC Kafka Connect Source

Submitted by 非 Y 不嫁゛ on 2019-11-30 20:44:20
I am playing around with the Kafka Connect JDBC connector, specifically looking at what format the data written to the topic actually takes. I have been able to see new inserts and updates to the database, but I have not been able to detect deletes from the database. First: does the JDBC source support detecting these changes? I can't find documentation one way or the other. If it does, what format does it take on the actual topic? The Confluent JDBC source connector is able to capture "soft deletes", where the "deleted" rows are simply marked as such by your application but are not …
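To make the soft-delete approach concrete, a minimal JDBC source sketch in timestamp+incrementing mode is shown below. When the application marks a row as deleted and bumps its timestamp column, the connector re-emits that row (with the deleted flag set) to the topic; hard DELETEs are never seen by the connector. The database URL, table, and column names here are hypothetical.

connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
# Hypothetical database and table; the table carries a 'deleted' flag column
connection.url=jdbc:postgresql://localhost:5432/mydb
table.whitelist=orders
mode=timestamp+incrementing
timestamp.column.name=updated_at
incrementing.column.name=id
topic.prefix=jdbc-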

Kafka Connect JDBC sink connector not working

Submitted by 生来就可爱ヽ(ⅴ<●) on 2019-11-30 10:16:58
I am trying to use the Kafka Connect JDBC sink connector to insert data into Oracle, but it is throwing an error. I have tried all the possible schema configurations. Below are the examples; please suggest if I am missing anything. These are my configuration files and errors.

Case 1 - First configuration: internal.value.converter.schemas.enable=false, so I am getting the following:

[2017-08-28 16:16:26,119] INFO Sink task WorkerSinkTask{id=oracle_sink-0} finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:233)
[2017-08-28 16:16:26,606] INFO Discovered coordinator dfw …
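For orientation, the JDBC sink requires records that carry a schema, so a sink configuration along the lines of the sketch below is the usual starting point, with messages written either as Avro or as JSON envelopes containing "schema" and "payload" fields. This is not the poster's exact setup; the topic name and Oracle connection URL are placeholders.

connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=oracle_sink_topic
connection.url=jdbc:oracle:thin:@localhost:1521/XE
connection.user=$$USER
connection.password=$$PASS
auto.create=true
insert.mode=insert
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true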

How to use from_json with Kafka connect 0.10 and Spark Structured Streaming?

Submitted by 北战南征 on 2019-11-30 09:02:14
I was trying to reproduce the example from [Databricks][1] and apply it to the new connector to Kafka and Spark Structured Streaming; however, I cannot parse the JSON correctly using the out-of-the-box methods in Spark... Note: the topic is written into Kafka in JSON format.

val ds1 = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", IP + ":9092")
  .option("zookeeper.connect", IP + ":2181")
  .option("subscribe", TOPIC)
  .option("startingOffsets", "earliest")
  .option("max.poll.records", 10)
  .option("failOnDataLoss", false)
  .load()

The following code won't work, I believe that's …
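A common way to get the parsing to work (a sketch, assuming the topic holds flat JSON objects; the field names in the schema are illustrative) is to cast the Kafka value to a string and then apply from_json with an explicit schema:

import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types._

// Illustrative schema; replace with the actual fields of the JSON messages
val schema = new StructType()
  .add("id", LongType)
  .add("name", StringType)

val parsed = ds1
  .selectExpr("CAST(value AS STRING) AS json")        // Kafka delivers the value as bytes
  .select(from_json(col("json"), schema).as("data"))  // parse with the declared schema
  .select("data.*")                                   // flatten into columns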

Kafka Connect running out of heap space

Submitted by 不问归期 on 2019-11-30 08:33:35
Question: After starting Kafka Connect (connect-standalone), my task fails immediately after starting with:

java.lang.OutOfMemoryError: Java heap space
  at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
  at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
  at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
  at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
  at org.apache.kafka.common.network.KafkaChannel.receive …
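Two things are worth checking here (the usual diagnosis for this stack trace, not something stated in the excerpt): the Connect start scripts honor the KAFKA_HEAP_OPTS environment variable, and an OutOfMemoryError inside NetworkReceive also shows up when a plaintext client talks to an SSL-enabled listener, so the security settings deserve a look as well. A minimal heap bump, assuming the stock connect-standalone script, looks like:

# The stock Kafka/Confluent scripts read KAFKA_HEAP_OPTS if it is set
export KAFKA_HEAP_OPTS="-Xms512M -Xmx2G"
connect-standalone worker.properties my-connector.properties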

Make Kafka Topic Log Retention Permanent

Submitted by 拟墨画扇 on 2019-11-30 02:51:10
Question: I am writing log messages into a Kafka topic and I want the retention of this topic to be permanent. I have seen in Kafka and Kafka Connect (_schemas, connect-configs, connect-status, connect-offsets, etc.) that there are special topics that are not deleted by the log retention time. How do I make a topic behave like these other special topics? Is it the naming convention or some other property? Thanks

Answer 1: These special topics are compacted topics. This means they are made up of keyed …
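To illustrate the answer's point (a sketch with a hypothetical topic name, assuming a local ZooKeeper): compaction is a per-topic configuration, and for topics where every record must be kept verbatim, time- and size-based retention can instead be disabled outright.

# Make the topic compacted, like _schemas and the connect-* topics
kafka-configs --zookeeper localhost:2181 --alter --entity-type topics \
  --entity-name my-log-topic --add-config cleanup.policy=compact

# Alternatively, keep every record by disabling time- and size-based retention
kafka-configs --zookeeper localhost:2181 --alter --entity-type topics \
  --entity-name my-log-topic --add-config retention.ms=-1,retention.bytes=-1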
