apache-kafka-connect

Batch Size in kafka jdbc sink connector

﹥>﹥吖頭↗ Submitted on 2020-01-25 03:57:21
Question: I want to read only 5000 records per batch through the JDBC sink, so I set batch.size in the JDBC sink config file:

name=jdbc-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
batch.size=5000
topics=postgres_users
connection.url=jdbc:postgresql://localhost:34771/postgres?user=foo&password=bar
file=test.sink.txt
auto.create=true

But batch.size has no effect: records are inserted into the database as soon as new records arrive at the source.
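For context on why a setting like this may not behave as expected: in the JDBC sink, batch.size is only an upper bound on how many records are grouped into a single database write; it does not make the connector wait until 5000 records have accumulated. How many records the sink task receives per poll is governed by the underlying consumer, which (in Connect 2.3+) can be overridden per connector. A hedged sketch, with illustrative values:

```properties
# Upper bound on records per INSERT batch (JDBC sink setting)
batch.size=5000
# Ask the worker's consumer to hand the task up to 5000 records per poll.
# This connector-level override requires the worker to be started with
# connector.client.config.override.policy=All
consumer.override.max.poll.records=5000
```

Even with both settings, a poll returns whatever is currently available, so small batches are still possible when the topic is being written to slowly.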

How to copy or configure kafka connect plugin files?

只谈情不闲聊 Submitted on 2020-01-25 00:29:13
Question: I have downloaded plugin files from https://www.confluent.io/connector/kafka-connect-cdc-microsoft-sql/. The download contains three folders (lib, etc, doc) and a manifest.json. The etc folder has connect-avro-docker.properties, mssqlsource.properties, and repro.properties. I can add the lib folder to CONNECT_PLUGIN_PATH, but what about these config files? The https://docs.confluent.io/current/connect/userguide.html page gives no clear instructions on where to copy them. How do I copy or configure Kafka Connect plugin files?
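A likely resolution, sketched under the assumption of a standard Confluent Platform layout (paths illustrative): only the lib directory needs to be discoverable via plugin.path; the files under etc are example connector configurations that you submit to Connect (via the REST API or connect-standalone), not files that must be copied to a specific location.

```properties
# worker config (e.g. connect-distributed.properties)
# plugin.path lists directories that CONTAIN plugin folders, e.g. place the
# downloaded lib/ under /opt/connectors/kafka-connect-cdc-mssql/
plugin.path=/usr/share/java,/opt/connectors
```

The properties files from etc can then be edited and POSTed to the Connect REST API (or passed on the connect-standalone command line) to create the connector.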

Trying to index kafka topic in Elasticsearch with Kafka Connect

大憨熊 Submitted on 2020-01-25 00:10:07
Question: I want to index an Avro topic from Kafka into Elasticsearch, but I am having trouble getting my timestamp field recognized by Elasticsearch as a date field. I have used the following configuration for the connector:

{
  "name": "es-sink-barchart-10",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter
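One commonly suggested approach for this class of problem (not confirmed as the asker's eventual fix) is a TimestampConverter single message transform, so the field reaches Elasticsearch as a proper timestamp type rather than a long or string; the field name timestamp below is an assumption:

```json
"transforms": "ts",
"transforms.ts.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
"transforms.ts.field": "timestamp",
"transforms.ts.target.type": "Timestamp"
```

An alternative is to define an Elasticsearch index template mapping the field as a date before the connector creates the index.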

Debezium Connector for RDS Aurora

谁都会走 Submitted on 2020-01-24 20:57:30
Question: I'm trying to use Debezium with RDS/Aurora, and I'm wondering which connector to use: the MySQL connector, or is there a separate connector for Aurora? Also, how can I connect a Debezium connector running on localhost to a remote AWS Aurora DB? If anyone is using Debezium with Aurora, please share your experience. Finally, how can I configure Debezium to write the different tables we are monitoring to different Kafka topics? Regards

Answer 1: While creating an AWS Aurora instance you must have chosen between Amazon Aurora
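Since Aurora MySQL speaks the MySQL wire protocol, the standard Debezium MySQL connector is normally used, pointed at the cluster endpoint (with binlog_format=ROW enabled in the DB cluster parameter group). The topic question is also answered by Debezium's design: each monitored table gets its own topic, named serverName.databaseName.tableName. A minimal sketch, with hostname and credentials as placeholders:

```json
{
  "name": "aurora-inventory-source",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "<password>",
    "database.server.id": "184054",
    "database.server.name": "aurora",
    "database.history.kafka.bootstrap.servers": "localhost:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}
```

A connector running on localhost reaches the remote Aurora instance like any other client: the Aurora security group must allow inbound traffic on port 3306 from the Connect host.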

How to choose a Key and Offset for a Kafka Producer

▼魔方 西西 Submitted on 2020-01-24 20:37:06
Question: I'm following the guide here. While working through the code I came up with two questions. Are the key and the offset the same thing? According to Google: "Offset: A Kafka topic receives messages across a distributed set of partitions where they are stored. Each partition maintains the messages it has received in a sequential order where they are identified by an offset, also known as a position." The two seem very similar to me, since the offset identifies a unique message in the partition. Producers send records to a
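They are not the same thing, and a small sketch may help separate them: the key is chosen by the producer and (via hashing) decides which partition a record goes to; the offset is assigned by the broker and is simply the record's sequential position within that partition. The snippet below is a deliberately simplified illustration, not Kafka's actual implementation (the real default partitioner hashes the serialized key bytes with murmur2, not String.hashCode):

```java
public class KeyVsOffset {
    // Simplified stand-in for Kafka's default partitioner: the KEY chooses
    // the partition. (Kafka really uses murmur2 on the serialized key bytes;
    // String.hashCode is used here only to illustrate the idea.)
    public static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // Same key always lands on the same partition...
        System.out.println(partitionFor("user-42", 3) == partitionFor("user-42", 3)); // prints true
        // ...whereas the OFFSET is assigned by the broker: it is the next
        // sequential position (0, 1, 2, ...) within whichever partition the
        // record landed in, regardless of the key.
    }
}
```

So two records with the same key share a partition but never an offset; two records in different partitions can share an offset but say nothing about each other's keys.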

Steps to run kafka connect on windows7?

雨燕双飞 Submitted on 2020-01-24 01:04:02
Question: I am able to set up and run Kafka on Windows 7 as mentioned. Can you please help me with the steps to run the mq-connector (or another connector) jar on Windows? Thanks in advance.

Answer 1: I am answering my own question. Here are the instructions for setting up and running Apache Kafka on Windows 7 and the steps to run the mq-connector jar (or any other connector) on Windows:

>> CLASSPATH=<connector-root-directory>/target/kafka-connect-mq-source-0.2-SNAPSHOT-jar-with-dependencies.jar
>> bin/connect
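The answer above is cut off, and the commands it shows are Unix-style. For completeness, on Windows the equivalent steps typically use the batch scripts under bin\windows and set for environment variables; the paths below are illustrative, not taken from the original answer:

```
set CLASSPATH=C:\connectors\kafka-connect-mq-source-0.2-SNAPSHOT-jar-with-dependencies.jar
bin\windows\connect-standalone.bat config\connect-standalone.properties mq-source.properties
```

The second argument is the worker config and the third is the connector's own properties file.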

Kafka Connect can't find connector

别等时光非礼了梦想. Submitted on 2020-01-23 08:29:06
Question: I'm trying to use the Kafka Connect Elasticsearch connector, but am unsuccessful. It crashes with the following error:

[2018-11-21 14:48:29,096] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:108)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches io.confluent.connect.elasticsearch.ElasticsearchSinkConnector, available connectors are:
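The usual cause of this "Failed to find any class" error is that the Elasticsearch connector's jars are not on the worker's plugin.path. A hedged sketch of the typical fix (paths illustrative):

```properties
# worker config: plugin.path lists directories that CONTAIN the connector's
# folder (not the jar itself)
plugin.path=/usr/share/java,/usr/share/confluent-hub-components
```

The "available connectors are:" list printed after the exception is useful here: if the Elasticsearch connector is missing from it, the worker never loaded the plugin, and the path (or a missing worker restart) is the problem.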

kafka-connect-jdbc : SQLException: No suitable driver only when using distributed mode

家住魔仙堡 Submitted on 2020-01-17 01:24:08
Question: We have successfully used MySQL-to-Kafka data ingestion with the JDBC standalone connector, but are now facing an issue using the same setup in distributed mode (as a Kafka Connect service). The connect-distributed.properties file:

bootstrap.servers=IP1:9092,IP2:9092
group.id=connect-cluster
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.topic=connect-offsets
offset.storage.replication.factor=2
config.storage.topic=connect-configs
config.storage.replication.factor=2
status
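"No suitable driver" appearing only in distributed mode usually means the MySQL JDBC driver jar is visible to the standalone worker's classpath but not to every distributed worker. A common remedy (paths and driver version illustrative) is to copy the driver next to the kafka-connect-jdbc jars on each worker host and restart the workers:

```shell
# on EVERY Connect worker host
cp mysql-connector-java-8.0.28.jar /usr/share/java/kafka-connect-jdbc/
# then restart the connect-distributed service on each worker
```

The driver must sit in the same plugin directory as the JDBC connector itself (or on the worker classpath), because plugin classloaders are isolated from one another.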

Kafka Connect API error when send default value of STRUCT with both JsonConvertor and AvroConvertor

旧城冷巷雨未停 Submitted on 2020-01-16 11:59:54
Question: Here is the code:

SchemaBuilder schemaBuilder = SchemaBuilder.struct()
    .field("province", SchemaBuilder.STRING_SCHEMA)
    .field("city", SchemaBuilder.STRING_SCHEMA);
Struct defaultValue = new Struct(schemaBuilder)
    .put("province", "aaa")
    .put("city", "aaaa");
Schema addressSchema = schemaBuilder.defaultValue(defaultValue).build();
Schema dataSchema = SchemaBuilder.struct().name("personMessage")
    .field("address", addressSchema).build();
Struct normalValue = new Struct(addressSchema)
    .put(

Kafka connect - string cannot be casted to struct

梦想的初衷 Submitted on 2020-01-16 09:12:24
Question: I am doing a PoC of Confluent Kafka Connect version 5.2.3. We are trying to copy the messages of a topic to a file as a backup, and from this file back to the topic when we need it. The topic has key = string, value = protobuf. I am using:

key.convertor=org.apache.kafka.connect.storgare.StringConvertor
value.convertor=com.blueapron.connect.protobuf.ProtobufConvertor
value.convertor.protoClassName=<proto class name>

Sink config:

name=test
connector.class=FileStreamSink
tasks.max=1
file=test.txt
topics=testtopic

Source
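Worth noting before debugging the cast error itself: the property and class names as quoted contain misspellings that would prevent the converters from loading at all. The standard spelling is converter (not convertor), the package is org.apache.kafka.connect.storage (not storgare), and the classes end in Converter (not Convertor). A corrected sketch, with the proto class name left as a placeholder exactly as in the question:

```properties
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=com.blueapron.connect.protobuf.ProtobufConverter
value.converter.protoClassName=<proto class name>
```

If the misspelled keys are simply ignored, the worker falls back to its default converters, which is one plausible way a String ends up where a Struct is expected.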