This seems like it should be obvious, but in reviewing the docs and examples, I'm not sure I can find a way to take a structured stream and transform it using PySpark.
Another way for a specific column (column_name):
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

def to_upper(string):
    return string.upper()

# Register the plain Python function as a UDF with an explicit return type
to_upper_udf = udf(to_upper, StringType())

# Add the transformed column, then drop the original
records = raw_records.withColumn("new_column_name",
                                 to_upper_udf(raw_records["column_name"])) \
    .drop("column_name")
Every transformation applied in Structured Streaming has to be fully contained in the Dataset world; in the case of PySpark, this means you can use only the DataFrame API or SQL, and conversions to RDDs (or DStreams, or local collections) are not supported.
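For example, the same uppercase logic can stay entirely in the DataFrame world with the built-in upper function, which also avoids Python UDF serialization overhead (a minimal sketch; raw_records and the value column are taken from the example below):

from pyspark.sql.functions import upper

# Built-in column function; no Python UDF needed
raw_records.select(upper("value").alias("value_upper"))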
If you want to use plain Python code, you have to use a UserDefinedFunction.
from pyspark.sql.functions import udf
@udf  # with no arguments, the return type defaults to StringType
def to_upper(s):
    return s.upper()
raw_records.select(to_upper("value"))
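To put this in a streaming context, a minimal end-to-end sketch might look like the following (the socket source and console sink are illustrative assumptions, not part of the original answer; spark is an existing SparkSession):

# Read a stream; the socket source yields a DataFrame with a single "value" column
raw_records = (spark.readStream
    .format("socket")
    .option("host", "localhost")
    .option("port", 9999)
    .load())

# Apply the UDF and write the results to the console sink
query = (raw_records
    .select(to_upper("value").alias("value"))
    .writeStream
    .format("console")
    .start())

query.awaitTermination()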
See also Spark Structured Streaming and Spark-Ml Regression