Batch size in Kafka JDBC sink connector
Question: I want the JDBC sink to read only 5000 records per batch, so I set `batch.size` in the JDBC sink config file:

```properties
name=jdbc-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
batch.size=5000
topics=postgres_users
connection.url=jdbc:postgresql://localhost:34771/postgres?user=foo&password=bar
file=test.sink.txt
auto.create=true
```

But `batch.size` appears to have no effect: records are inserted into the database as soon as new records arrive in the source topic.
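A note on the likely behavior (not from the question itself): in the JDBC sink, `batch.size` is only an upper bound on how many records are grouped into one database write per poll; it does not make the connector wait until 5000 records have accumulated. How many records each poll actually delivers is governed by the worker's underlying consumer settings. A hedged sketch, assuming Kafka Connect 2.3+ where per-connector `consumer.override.*` properties are supported; the specific values here are illustrative, not recommendations:

```properties
# Raise the ceiling on records returned per poll (consumer default is 500)
consumer.override.max.poll.records=5000
# Encourage the broker to wait for more data before answering a fetch
consumer.override.fetch.min.bytes=1048576
consumer.override.fetch.max.wait.ms=500
```

With larger fetches per poll, `batch.size` then caps how many of those records are written to the database in a single batch.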