How to transform structured streams with PySpark?

离开以前 asked 2020-12-10 18:46

This seems like it should be obvious, but after reviewing the docs and examples, I'm not sure I can find a way to take a structured stream and transform it using PySpark.
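
For reference, a minimal sketch of the kind of setup being asked about, assuming a socket source (the host and port are placeholders); the answers below transform the resulting raw_records DataFrame:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("stream-transform").getOrCreate()

    # Each incoming line becomes a row with a single string column named "value".
    raw_records = (spark.readStream
                   .format("socket")
                   .option("host", "localhost")
                   .option("port", 9999)
                   .load())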

2 Answers
  • 2020-12-10 19:19

    Another way for a specific column (column_name):

    from pyspark.sql.functions import udf
    from pyspark.sql.types import StringType

    def to_upper(string):
        return string.upper()

    # Wrap the plain Python function as a UDF that returns a string column.
    to_upper_udf = udf(to_upper, StringType())

    # Add the uppercased column, then drop the original.
    records = raw_records.withColumn("new_column_name",
                                     to_upper_udf(raw_records['column_name'])) \
                         .drop("column_name")
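
    For a simple transformation like uppercasing, a built-in column function avoids the Python UDF round-trip entirely; a sketch using the same (hypothetical) column names:

    from pyspark.sql.functions import upper

    # upper() runs in the JVM, so there is no per-row Python serialization cost.
    records = raw_records.withColumn("new_column_name",
                                     upper(raw_records["column_name"])) \
                         .drop("column_name")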
    
    
  • 2020-12-10 19:28

    Every transformation applied in Structured Streaming has to be fully contained in the Dataset world; in the case of PySpark this means you can use only DataFrame or SQL operations, and conversion to an RDD (or DStream, or local collections) is not supported.

    If you want to use plain Python code, you have to use a UserDefinedFunction.

    from pyspark.sql.functions import udf
    
    @udf
    def to_upper(s):
        return s.upper()
    
    raw_records.select(to_upper("value"))
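
    To see how this plugs into an actual streaming query, a minimal end-to-end sketch, assuming a socket source and a console sink (the host, port, and column names are illustrative):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udf

    spark = SparkSession.builder.appName("udf-stream").getOrCreate()

    @udf
    def to_upper(s):
        # Guard against nulls, which streaming inputs commonly contain.
        return s.upper() if s is not None else None

    # Socket source: each line arrives as a row with one "value" column.
    raw_records = (spark.readStream
                   .format("socket")
                   .option("host", "localhost")
                   .option("port", 9999)
                   .load())

    transformed = raw_records.select(to_upper("value").alias("value_upper"))

    # Print each micro-batch to the console; runs until stopped.
    query = (transformed.writeStream
             .outputMode("append")
             .format("console")
             .start())
    query.awaitTermination()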
    

    See also Spark Structured Streaming and Spark-Ml Regression
