How to use foreach or foreachBatch in PySpark to write to database?
Question: I want to do Spark Structured Streaming (Spark 2.4.x) from a Kafka source to MariaDB with Python (PySpark). I want to operate on the streamed Spark DataFrame directly, not on a static or Pandas DataFrame.

It seems that one has to use `foreach` or `foreachBatch`, since there is no built-in database sink for streaming DataFrames according to https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#output-sinks.

Here is my attempt:

```python
from pyspark.sql import SparkSession
import pyspark.sql
```
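Since the question asks how `foreachBatch` fits in, here is a minimal sketch of the pattern: `foreachBatch` hands each micro-batch to a plain Python function as a *static* DataFrame, on which the ordinary JDBC batch writer is available. The broker address, topic, JDBC URL, table name, and credentials below are placeholders (assumptions), not values from the question.

```python
def write_to_mariadb(batch_df, batch_id):
    """Called once per micro-batch. batch_df is a static DataFrame,
    so the regular DataFrameWriter JDBC path works here."""
    (batch_df.write
        .format("jdbc")
        .option("url", "jdbc:mariadb://localhost:3306/mydb")  # placeholder URL
        .option("driver", "org.mariadb.jdbc.Driver")
        .option("dbtable", "events")                          # placeholder table
        .option("user", "user")                               # placeholder credentials
        .option("password", "password")
        .mode("append")
        .save())

if __name__ == "__main__":
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("kafka-to-mariadb")
             .getOrCreate())

    # Read the Kafka topic as a streaming DataFrame (placeholder broker/topic).
    df = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "mytopic")
          .load()
          .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)"))

    # foreachBatch invokes write_to_mariadb for every micro-batch.
    query = (df.writeStream
             .foreachBatch(write_to_mariadb)
             .start())
    query.awaitTermination()
```

Note that the MariaDB JDBC driver jar (e.g. via `spark.jars.packages`) must be on the classpath for the `org.mariadb.jdbc.Driver` option to resolve; `foreachBatch` itself is available from Spark 2.4 onward.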