Question
Using the latest Kafka and the Confluent JDBC sink connector. Sending a really simple JSON message:
{
  "schema": {
    "type": "struct",
    "fields": [
      {
        "type": "int",
        "optional": false,
        "field": "id"
      },
      {
        "type": "string",
        "optional": true,
        "field": "msg"
      }
    ],
    "optional": false,
    "name": "msgschema"
  },
  "payload": {
    "id": 222,
    "msg": "hi"
  }
}
But I am getting this error:
org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
JSONLint says the JSON is valid. I have kept schemas.enable=true in the Kafka Connect configuration. Any pointers?
Answer 1:
You need to tell Connect that your schema is embedded in the JSON you're using.
You have:
value.converter=org.apache.kafka.connect.json.JsonConverter
But you also need:
value.converter.schemas.enable=true
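If you run Connect in distributed mode rather than standalone, the same two settings can be supplied per connector through the Connect REST API instead of the worker file. A minimal sketch, assuming the worker listens on localhost:8083 and reusing the connector/topic/connection values from the answer below as placeholders:
curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "sink-mysql",
    "config": {
      "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
      "topics": "test-mysql-jdbc-foobar",
      "connection.url": "jdbc:mysql://127.0.0.1:3306/demo?user=user1&password=user1pass",
      "auto.create": "true",
      "value.converter": "org.apache.kafka.connect.json.JsonConverter",
      "value.converter.schemas.enable": "true"
    }
  }'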
Answer 2:
In order to use the JDBC sink, your streamed messages must have a schema. This can be achieved either by using Avro with the Schema Registry, or by using JSON with embedded schemas. If schemas.enable=true was configured only after the source properties file had already been run, you may need to delete the topic, re-run the sink, and then start the source side again.
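For example, the topic can be dropped with the standard Kafka CLI before restarting both sides (topic name and addresses are placeholders; the flag depends on your Kafka version):
kafka-topics --zookeeper localhost:2181 --delete --topic test-mysql-jdbc-foobar
# on newer Kafka versions:
kafka-topics --bootstrap-server localhost:9092 --delete --topic test-mysql-jdbc-foobar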
Example sink.properties file:
name=sink-mysql
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=test-mysql-jdbc-foobar
connection.url=jdbc:mysql://127.0.0.1:3306/demo?user=user1&password=user1pass
auto.create=true
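With auto.create=true the sink creates the target table itself, named after the topic, when it sees the first record. A quick sanity check once a record has gone through, assuming the demo database from the connection URL above (the column types depend on the connector's type mapping):
SELECT * FROM `test-mysql-jdbc-foobar`;
This should return the payload row, i.e. id = 222 and msg = 'hi'.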
An example worker configuration file, connect-avro-standalone.properties:
bootstrap.servers=localhost:9092
# Converters applied to message keys and values; schemas.enable=true expects
# the schema/payload envelope shown in the question
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
# Converters for Connect's internal bookkeeping data (offsets, configs)
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
# Local storage file for offset data
offset.storage.file.filename=/tmp/connect.offsets
plugin.path=share/java
and execute
./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/sink.properties
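Once the worker is running, you can test by producing the message from the question with the console producer. A sketch, assuming a local broker (the tool is kafka-console-producer.sh in plain Apache Kafka): the whole schema-plus-payload document must be on a single line, because the console producer treats each line as one record, and note that Connect's JSON schema types are int8/int16/int32/int64, so int32 is used here rather than int:
./bin/kafka-console-producer --broker-list localhost:9092 --topic test-mysql-jdbc-foobar
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":true,"field":"msg"}],"optional":false,"name":"msgschema"},"payload":{"id":222,"msg":"hi"}}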
Source: https://stackoverflow.com/questions/49022120/kafka-jdbc-sink-connector-with-json-schema-not-working