apache-kafka-connect

Kafka to hdfs sink Missing required configuration “confluent.topic.bootstrap.servers” which has no default value

Submitted by 纵然是瞬间 on 2020-06-23 16:45:35

Question: My HDFS was installed via Ambari (HDP). I'm currently trying to load Kafka topics into an HDFS sink. Kafka and HDFS are installed on the same machine, x.x.x.x. I didn't change much from the default settings, apart from some ports, according to my needs. Here is how I run Kafka Connect:

    /usr/hdp/3.1.4.0-315/kafka/bin/connect-standalone.sh /etc/kafka/connect-standalone.properties /etc/kafka-connect-hdfs/quickstart-hdfs.properties

Inside connect-standalone.properties: bootstrap.servers=x.x.x.x …
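
This error usually comes from Confluent's commercially licensed connectors, which maintain their own internal topics (for licensing and metadata) and therefore need their own bootstrap servers, separate from the worker's bootstrap.servers. A minimal sketch of the lines to add to quickstart-hdfs.properties, assuming a single broker listening on x.x.x.x:9092 (the port is an assumption):

    # Bootstrap servers for the connector's internal Confluent topics (license/metadata)
    confluent.topic.bootstrap.servers=x.x.x.x:9092
    # With a single broker, the internal topic cannot have more than one replica
    confluent.topic.replication.factor=1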

How to set max.poll.records in Kafka-Connect API

Submitted by 戏子无情 on 2020-06-18 11:15:11

Question: I am using the Confluent 3.0.1 platform and building a Kafka-Elasticsearch connector. For this I am extending SinkConnector and SinkTask (the Kafka Connect APIs) to get data from Kafka. As part of this code I am overriding the taskConfigs method of SinkConnector to return "max.poll.records", so that only 100 records are fetched at a time. But it's not working: I get all the records at the same time, and I fail to commit offsets within the stipulated time. Please, can anyone help me configure "max.poll…
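
In Kafka Connect, properties returned from taskConfigs only reach your own task's configuration; they are not forwarded to the consumer that the framework creates for sink tasks. Consumer settings are instead placed in the worker configuration with a consumer. prefix. A minimal sketch, assuming the standalone worker file from the quickstart:

    # connect-standalone.properties (worker config)
    consumer.max.poll.records=100

On Kafka 2.3+ a per-connector override is also possible via consumer.override.max.poll.records, provided the worker sets connector.client.config.override.policy=All.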

Kafka Connect with a JdbcConnectionSource connector fails to create task (connector is RUNNING but task is not)

Submitted by 不问归期 on 2020-06-08 17:49:57

Question: It seems that rather often I create a Kafka Connect connector from the JdbcConnectionSource based on a query, and the connector is created successfully with status "RUNNING", but no task is created. Looking in the console logs of my container, I see no indication that anything is wrong, as far as I can tell: no errors, no warnings, no explanation of why the task failed. I can get other connectors to work, but sometimes one doesn't. How can one get more information to troubleshoot when a connector…
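
A reasonable first step is to interrogate the Connect REST API, which reports per-task state and, for failed tasks, a stack trace. In this sketch the worker address (localhost:8083) and the connector name are placeholders:

    # Show connector and task state, including any task-level stack trace
    curl -s http://localhost:8083/connectors/my-jdbc-source/status
    # Restart task 0 of the connector if it exists but is stuck
    curl -s -X POST http://localhost:8083/connectors/my-jdbc-source/tasks/0/restart

If the logs stay silent, workers on Kafka 2.4+ also let you raise the log level at runtime, without a restart:

    curl -s -X PUT -H "Content-Type: application/json" \
         -d '{"level": "DEBUG"}' \
         http://localhost:8083/admin/loggers/org.apache.kafka.connect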

Official MongoDB Source Connector for Apache Kafka with MongoDB-4.0

Submitted by 空扰寡人 on 2020-06-01 07:36:06

Question: I have a requirement to capture MongoDB change stream events (inserts, updates, etc.) and take some actions (save to OracleDB). I have thought of this design, which seems good: "MongoDB ChangeStream" --> "MongoDB Source Connector for Apache Kafka" --> "Kafka Broker-Topic" --> "Java Service" --> OracleDB. My question here is: I am using MongoDB 4.0, and the "MongoDB Source Connector for Apache Kafka" was introduced alongside MongoDB 4.2. Can I still use the "MongoDB Source Connector for Apache…
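
The connector is built on change streams, which MongoDB has offered since 3.6 on replica sets, so 4.0 should not be a blocker in itself; the connector's documentation is the authoritative word on supported server versions. A minimal sketch of registering the official source connector, with all names (URI, database, collection, connector name) as placeholder assumptions:

    {
      "name": "mongo-source",
      "config": {
        "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
        "connection.uri": "mongodb://x.x.x.x:27017/?replicaSet=rs0",
        "database": "mydb",
        "collection": "mycollection"
      }
    }

Note that change streams require the deployment to run as a replica set (or sharded cluster), even if it has only a single node.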

Debezium: No maximum LSN recorded in the database; please ensure that the SQL Server Agent is running

Submitted by て烟熏妆下的殇ゞ on 2020-06-01 06:25:47

Question: This question is related to: Debezium How do I correctly register the SqlServer connector with Kafka Connect - connection refused. On Windows 10, I have Debezium running against an instance of Microsoft SQL Server that is outside of a Docker container. I am getting the following warning every 390 milliseconds: No maximum LSN recorded in the database; please ensure that the SQL Server Agent is running [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource] I checked Debezium's code on…
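
Debezium logs this warning when the CDC capture job has not yet recorded any change LSNs, which typically means the SQL Server Agent is stopped or CDC was never enabled for the database. A couple of checks that can be run directly against SQL Server (the database name is a placeholder):

    -- Is CDC enabled on the database?
    SELECT name, is_cdc_enabled FROM sys.databases;

    -- Has the capture job recorded a maximum LSN yet? NULL means it has not.
    USE MyDatabase;
    SELECT sys.fn_cdc_get_max_lsn();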

kafka connector HTTP/API source

Submitted by 主宰稳场 on 2020-05-30 03:33:05

Question: I am aware of how to capture data from a data source, such as a specific API (e.g., an HTTP GET request), and ingest it into Kafka through a specific connector, for example:

    {
      "name": "localfileSource",
      "config": {
        "connector.class": "FileStreamSourceConnector",
        "tasks.max": "1",
        "file": "test.txt",
        "topic": "connectSource"
      }
    }

I would need something similar to this (FileStreamSourceConnector) that can be used with API sources.

Answer 1: You can always build your own connectors, to stream from an HTTP source in a…
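
Following that suggestion, below is a minimal sketch of the task side of a hand-rolled HTTP source connector. The class name and the http.url and topic config keys are hypothetical, offsets and schemas are omitted for brevity, and a companion SourceConnector class (not shown) is also required:

    // Minimal sketch of a SourceTask that polls an HTTP endpoint and writes the
    // response body to a Kafka topic. Config keys and class name are hypothetical.
    import java.io.IOException;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import org.apache.kafka.connect.errors.ConnectException;
    import org.apache.kafka.connect.source.SourceRecord;
    import org.apache.kafka.connect.source.SourceTask;

    public class HttpSourceTask extends SourceTask {
        private final HttpClient client = HttpClient.newHttpClient();
        private String url;
        private String topic;

        @Override
        public void start(Map<String, String> props) {
            url = props.get("http.url");   // hypothetical config key
            topic = props.get("topic");    // hypothetical config key
        }

        @Override
        public List<SourceRecord> poll() throws InterruptedException {
            // Throttle: the framework calls poll() in a tight loop; a real task
            // would make this interval configurable.
            Thread.sleep(10_000);
            try {
                HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
                String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
                // No source partition/offset tracking and no schema in this sketch.
                return Collections.singletonList(new SourceRecord(null, null, topic, null, body));
            } catch (IOException e) {
                throw new ConnectException("HTTP request failed", e);
            }
        }

        @Override
        public void stop() { }

        @Override
        public String version() { return "0.1"; }
    }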

Creating and using a custom kafka connect configuration provider

Submitted by 做~自己de王妃 on 2020-05-29 09:15:47

Question: I have installed and tested Kafka Connect in distributed mode; it works now, and it connects to the configured sink and reads from the configured source. That being the case, I moved on to enhancing my installation. The one area I think needs immediate attention is the fact that, to create a connector, the only available means is through REST calls; this means I need to send my information over the wire, unprotected. To secure this, Kafka introduced the new ConfigProvider, seen here. This…
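
For reference, the built-in FileConfigProvider shows the wiring pattern that a custom provider follows: register the provider in the worker config, then reference it with ${provider:path:key} placeholders in connector configs, so secrets never travel through the REST call in plain text. A sketch, where the secrets file path and key name are assumptions:

    # connect-distributed.properties (worker config)
    config.providers=file
    config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider

    # In a connector config submitted over REST, reference the secret indirectly:
    # "connection.password": "${file:/opt/secrets/connector.properties:db.password}"

A custom provider is registered the same way, by pointing config.providers.<name>.class at a class implementing org.apache.kafka.common.config.provider.ConfigProvider.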
