apache-kafka-connect

Kafka Connect is sending malformed JSON

自古美人都是妖i submitted on 2020-01-16 08:59:42
Question: I'm trying to perform a proof of concept using Kafka Connect with a RabbitMQ connector. Basically, I have two simple Spring Boot applications: a RabbitMQ producer and a Kafka consumer. The consumer cannot handle the messages from the connector because the connector is somehow transforming my JSON message; RabbitMQ sends {"transaction": "PAYMENT", "amount": "$125.0"} and Kafka Connect prints X{"transaction": "PAYMENT", "amount": "$125.0"}. Please note the X at the beginning. If I add a field, let's say
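
A stray leading byte like that X usually means the payload is being re-encoded somewhere between the source connector and the topic rather than passed through as-is. A minimal sketch of connector-level converter overrides that keep the RabbitMQ body untouched, assuming the connector hands the payload over as raw bytes (whether that matches this particular RabbitMQ connector is an assumption):

    # Pass the message body through byte-for-byte instead of re-serializing it
    value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
    # Keys are typically plain strings in this kind of pass-through setup
    key.converter=org.apache.kafka.connect.storage.StringConverter

With a pass-through converter on the source side, the downstream consumer can deserialize the value as plain JSON text.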

Kafka-MongoDB Debezium Connector: distributed mode

a 夏天 submitted on 2020-01-16 08:44:43
Question: I am working on the Debezium MongoDB source connector. Can I run the connector on my local machine in distributed mode by pointing it at a remote Kafka bootstrap server (deployed in Kubernetes) and a remote MongoDB URL? I tried this and the connector starts successfully, with no errors and just a few warnings, but no data flows from MongoDB. I am using the command below to run the connector: ./bin/connect-distributed ./etc/schema-registry/connect-avro-distributed.properties ./etc/kafka/connect-mongodb-source
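
One thing worth noting: unlike connect-standalone, the distributed worker does not take connector property files on the command line; it only reads the worker config, and connectors are created afterwards through the REST API. A sketch under that assumption (the connector name, MongoDB host, and the default REST port 8083 are placeholders):

    # Start only the worker in distributed mode
    ./bin/connect-distributed ./etc/schema-registry/connect-avro-distributed.properties

    # Then register the Debezium MongoDB connector over REST
    curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors \
      -d '{
            "name": "mongodb-source",
            "config": {
              "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
              "mongodb.hosts": "rs0/remote-mongo-host:27017",
              "mongodb.name": "dbserver1"
            }
          }'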

Debezium MongoDB connector 0.3.6 automatically stops tailing MongoDB until restarted at least once

大城市里の小女人 submitted on 2020-01-15 04:00:27
Question: I am using the Debezium MongoDB connector 0.3.6, running inside a Docker container. I have been monitoring Kafka Connect for some time and found that the connector automatically stops tailing MongoDB and sending change events to the Kafka brokers. Upon investigation, I found that sometimes, after a period of inactivity, its Mongo connection is refused; on retry it connects successfully and sends the large backlog of records it did not send during the inactive period. But this is not
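
If the oplog connection is being dropped after idle periods, how long the connector keeps retrying before its task gives up is governed by its backoff settings. A hedged sketch of raising them (these property names come from the Debezium MongoDB connector configuration; the values are illustrative, and whether they are already available in 0.3.6 is an assumption):

    # Keep retrying the MongoDB connection longer before the task gives up
    connect.max.attempts=32
    connect.backoff.initial.delay.ms=1000
    connect.backoff.max.delay.ms=300000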

Kafka JDBC sink connector with JSON schema not working

末鹿安然 submitted on 2020-01-13 19:13:10
Question: Using the latest Kafka and Confluent JDBC sink connectors. Sending a really simple JSON message: { "schema": { "type": "struct", "fields": [ { "type": "int", "optional": false, "field": "id" }, { "type": "string", "optional": true, "field": "msg" } ], "optional": false, "name": "msgschema" }, "payload": { "id": 222, "msg": "hi" } } But getting the error: org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain
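
This DataException usually means a JsonConverter with schemas.enable=true is being applied to a part of the record that does not carry the schema/payload envelope; since the value shown above does have the envelope, the key converter is a common culprit. A sketch of converter settings consistent with that message, assuming plain string keys:

    # Values carry the {"schema": ..., "payload": ...} envelope shown above
    value.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter.schemas.enable=true
    # Keys are plain strings, so no schema envelope is expected for them
    key.converter=org.apache.kafka.connect.storage.StringConverter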

How to sink a Kafka topic to Oracle using Kafka Connect?

半腔热情 submitted on 2020-01-13 06:45:51
Question: I have a Kafka topic with data; the following is the config file I am using to sink the data to Oracle.

Sink.properties:

    name=ora_sink_task
    connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
    tasks.max=1
    topics=person
    connection.url=jdbc:oracle:thin:@127.0.0.1:1521/XE
    connection.user=kafka
    connection.password=kafka
    auto.create=true
    insert.mode=upsert
    pk.mode=record_value
    pk.fields=id

I am getting the following response in the logs: [2017-06-06 21:09:33,557] DEBUG Scavenging sessions at 1496504373557 (org
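
For what it's worth, the DEBUG line shown is just Jetty's periodic session scavenging, not an error from the JDBC sink. Two things this config implicitly relies on: the Oracle JDBC driver must be on the connector's classpath, and because pk.mode=record_value with auto.create=true is used, the record values must carry a schema so the connector can find the id field and derive column types. A sketch of launching it in standalone mode under those assumptions (file paths are illustrative):

    # First argument: the worker config; second: the sink connector config above
    ./bin/connect-standalone ./etc/kafka/connect-standalone.properties ./Sink.properties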

Kafka Connect setup to send records from Aurora using AWS MSK

家住魔仙堡 submitted on 2020-01-12 11:09:55
Question: I have to send records from Aurora/MySQL to MSK and from there to the Elasticsearch service: Aurora --> Kafka Connect ---> AWS MSK ---> Kafka Connect ---> Elasticsearch. The record in the Aurora table structure is something like this, and I think the record will go to AWS MSK in this format: "o36347-5d17-136a-9749-Oe46464",0,"NEW_CASE","WRLDCHK","o36347-5d17-136a-9749-Oe46464","<?xml version=""1.0"" encoding=""UTF-8"" standalone=""yes""?><caseCreatedPayload><batchDetails/>","CASE",08-JUL-17 10.02.32.217000000 PM,
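
The second hop of that pipeline is another Kafka Connect worker running an Elasticsearch sink against the MSK topic. A hypothetical sketch of that sink config, assuming Confluent's Elasticsearch connector; the topic name, connection URL, and ignore flags are placeholders to be adapted to the actual record format:

    name=es-sink
    connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
    tasks.max=1
    topics=aurora-cases
    connection.url=https://my-es-domain.us-east-1.es.amazonaws.com
    type.name=_doc
    # Set these when the records have no keys / no schema to map into the index
    key.ignore=true
    schema.ignore=true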

Failed to find any class that implements Connector and which name matches io.confluent.connect.elasticsearch.ElasticsearchSinkConnector

六眼飞鱼酱① submitted on 2020-01-06 08:06:33
Question: I have MSK running on AWS and I am able to send records into and out of MSK. I just wanted to use Kafka Connect so that records coming into MSK will go to Elasticsearch. I have done the things below, but I am not sure whether my connector is working properly, because I cannot see any records in Elasticsearch. This is the record that I am sending: { "data": { "RequestID": 517082653, "ContentTypeID": 9, "OrgID": 16145, "UserID": 4, "PromotionStartDateTime": "2019-12-14T16:06:21Z",
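
The "Failed to find any class that implements Connector" message in the title normally means the Elasticsearch connector plugin is not on the worker's plugin.path, so io.confluent.connect.elasticsearch.ElasticsearchSinkConnector can never be loaded. A sketch of the relevant worker setting and a quick check (the plugin directory is a placeholder):

    # Worker config: directory containing the unpacked
    # confluentinc-kafka-connect-elasticsearch plugin
    plugin.path=/usr/local/share/kafka/plugins

    # After restarting the worker, the plugin should appear here
    curl http://localhost:8083/connector-plugins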

Debezium from MySQL to Postgres with JDBC Sink - change of transforms.route.replacement gives a SinkRecordField error

最后都变了- submitted on 2020-01-06 08:05:23
Question: I am using this debezium-examples source.json: { "name": "inventory-connector", "config": { "connector.class": "io.debezium.connector.mysql.MySqlConnector", "tasks.max": "1", "database.hostname": "mysql", "database.port": "3306", "database.user": "debezium", "database.password": "dbz", "database.server.id": "184054", "database.server.name": "dbserver1", "database.whitelist": "inventory", "database.history.kafka.bootstrap.servers": "kafka:9092", "database.history.kafka.topic": "schema-changes
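
For reference, in that example the topic-to-table routing happens on the JDBC sink side via a RegexRouter transform; SinkRecordField errors typically surface there because changing transforms.route.replacement changes which table the sink tries to auto-create from the record schema. A sketch of the relevant fragment of the sink config in the shape the example uses (the regex splits Debezium's server.schema.table topic names and "$3" keeps only the table name):

    "transforms": "route",
    "transforms.route.type": "org.apache.kafka.connect.transforms.RegexRouter",
    "transforms.route.regex": "([^.]+)\\.([^.]+)\\.([^.]+)",
    "transforms.route.replacement": "$3"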

Why does Kafka JDBC Connect insert data as BLOB instead of VARCHAR?

雨燕双飞 submitted on 2020-01-06 07:01:18
Question: I am using a Java producer to insert data into my Kafka topic. Then I use Kafka JDBC Connect to insert the data into my Oracle table. Below is my producer code. package producer.serialized.avro; import org.apache.avro.Schema; import org.apache.avro.generic.GenericData; import org.apache.avro.generic.GenericRecord; import org.apache.kafka.clients.producer.KafkaProducer; import org.apache.kafka.clients.producer.ProducerConfig; import org.apache.kafka.clients.producer.ProducerRecord; import java.util
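
When a producer writes Avro GenericRecords but the sink worker deserializes them with a byte-oriented converter, the JDBC sink only sees opaque bytes and maps the column to BLOB. A sketch of sink-side converter settings that let string fields map to character columns instead, assuming the producer uses Confluent's Avro serializer with a Schema Registry (the registry URL is a placeholder):

    # Decode Avro records via the Schema Registry so field types survive
    key.converter=io.confluent.connect.avro.AvroConverter
    key.converter.schema.registry.url=http://localhost:8081
    value.converter=io.confluent.connect.avro.AvroConverter
    value.converter.schema.registry.url=http://localhost:8081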