How to read streaming data in XML format from Kafka?


Question


I am trying to read XML data from a Kafka topic using Spark Structured Streaming.

I tried the Databricks spark-xml package, but I got an error saying that this package does not support streamed reading. Is there any way I can extract XML data from a Kafka topic using Structured Streaming?

My current code:

df = spark \
      .readStream \
      .format("kafka") \
      .format('com.databricks.spark.xml') \
      .options(rowTag="MainElement")\
      .option("kafka.bootstrap.servers", "localhost:9092") \
      .option(subscribeType, "test") \
      .load()

The error:

py4j.protocol.Py4JJavaError: An error occurred while calling o33.load.
: java.lang.UnsupportedOperationException: Data source com.databricks.spark.xml does not support streamed reading
        at org.apache.spark.sql.execution.datasources.DataSource.sourceSchema(DataSource.scala:234)

Answer 1:


.format("kafka") \
.format('com.databricks.spark.xml') \

The last one with com.databricks.spark.xml wins and becomes the streaming source (hiding Kafka as the source).

In other words, the above is equivalent to .format('com.databricks.spark.xml') alone.

As you may have experienced, the Databricks spark-xml package does not support streamed reading, i.e. it cannot act as a streaming source. The package is for batch processing only.
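For reference, the corrected PySpark reader keeps only the Kafka format (a sketch of the question's own code; the "subscribe" option is an assumption, since subscribeType is not defined in the snippet):

df = spark \
      .readStream \
      .format("kafka") \
      .option("kafka.bootstrap.servers", "localhost:9092") \
      .option("subscribe", "test") \
      .load()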

Is there any way I can extract XML data from a Kafka topic using Structured Streaming?

You are left with accessing and processing the XML yourself with a standard function or a UDF. There's no built-in support for streaming XML processing in Structured Streaming up to Spark 2.2.0.

That should not be a big deal anyway. In Scala, the code could look as follows:

val input = spark.
  readStream.
  format("kafka").
  ...
  load

val values = input.select('value cast "string")

// placeholder (???) for your actual XML parsing logic
val extractValuesFromXML = udf { (xml: String) => ??? }
val numbersFromXML = values.withColumn("number", extractValuesFromXML('value))

// print XMLs and numbers to the stdout
val q = numbersFromXML.
  writeStream.
  format("console").
  start

Another possible solution could be to write your own custom streaming Source that would deal with the XML format in def getBatch(start: Option[Offset], end: Offset): DataFrame. That is supposed to work.




Answer 2:


import xml.etree.ElementTree as ET

df = spark \
      .readStream \
      .format("kafka") \
      .option("kafka.bootstrap.servers", "localhost:9092") \
      .option("subscribe", "test") \
      .load()

Then I wrote a Python UDF:

def parse(s):
    xml = ET.fromstring(s)
    ns = {'real_person': 'http://people.example.com',
          'role': 'http://characters.example.com'}

    actor = ""
    role = ""

    # Element objects with no children are falsy, so compare against None explicitly
    actor_el = xml.find('real_person:actor', ns)
    if actor_el is not None:
        actor = actor_el.text

    role_el = xml.find('real_person:role', ns)
    if role_el is not None:
        role = role_el.text

    return actor + "|" + role

Register and apply the UDF:

from pyspark.sql.functions import udf, col, split

extractValuesFromXML = udf(parse)

xml_df = df.withColumn("mergedCol", extractValuesFromXML(col("value").cast("string")))

allcol_df = xml_df.withColumn("actorName", split(col("mergedCol"), "\\|").getItem(0)) \
    .withColumn("Role", split(col("mergedCol"), "\\|").getItem(1))



Answer 3:


You cannot mix formats this way. The Kafka source loads records as Rows with a number of fields, such as key, value and topic, with the value column storing the payload as a binary type:

Note that the following Kafka params cannot be set and the Kafka source or sink will throw an exception:

...

value.deserializer: Values are always deserialized as byte arrays with ByteArrayDeserializer. Use DataFrame operations to explicitly deserialize the values.

Parsing this content is the user's responsibility and cannot be delegated to other data sources. See, for example, my answer to How to read records in JSON format from Kafka using Structured Streaming?

For XML you will likely need a UDF (UserDefinedFunction), although you can try the Hive XPath functions first. You should also decode the binary data.
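A minimal PySpark sketch of the XPath route, assuming the question's rowTag (MainElement) is the document root and that actor/role child elements exist (both element names are assumptions, not taken from the question):

# Cast the binary value column to a string, then extract fields with Hive's xpath_string
parsed = df \
    .selectExpr("CAST(value AS STRING) AS xml") \
    .selectExpr(
        "xpath_string(xml, '/MainElement/actor') AS actor",
        "xpath_string(xml, '/MainElement/role') AS role")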




Answer 4:


Using the existing library,

https://github.com/databricks/spark-xml

together with foreachBatch (Spark 2.4+):

import com.databricks.spark.xml.XmlRelation
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.types.StructType

inputStream.writeStream.foreachBatch { (batchDF: DataFrame, batchId: Long) =>

    val parameters = collection.mutable.Map.empty[String, String]
    val schema: StructType = null // null lets spark-xml infer the schema

    // foreachBatch hands over a static DataFrame, so spark-xml can run on it in batch mode
    val rdd: RDD[String] = batchDF.as[String].rdd

    val relation = XmlRelation(
      () => rdd,
      None,
      parameters.toMap,
      schema)(spark.sqlContext)

    spark.baseRelationToDataFrame(relation)
      .write.format("parquet")
      .mode("append")
      .saveAsTable("default.catalog_sink")

}.start()

spark.baseRelationToDataFrame(relation) returns whatever spark-xml would have produced in batch mode, so you can run Spark SQL on that DataFrame to derive exactly the result you need.
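Once a few batches have been appended, the table can be queried like any other; in PySpark, for instance (the columns depend on the schema spark-xml infers):

spark.sql("SELECT * FROM default.catalog_sink").show(truncate=False)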



Source: https://stackoverflow.com/questions/46004610/how-to-read-streaming-data-in-xml-format-from-kafka
