Is it possible to subclass DataFrame in PySpark?


The documentation for PySpark shows DataFrames being constructed from sqlContext, sqlContext.read(), and a variety of other methods.

(See h

1 Answer
  • 2021-01-05 12:46

    It really depends on your goals.

    • Technically speaking it is possible. pyspark.sql.DataFrame is just a plain Python class, so you can extend it or monkey-patch it if you need to (a monkey-patching sketch follows the example below).

      from pyspark.sql import DataFrame

      class DataFrameWithZipWithIndex(DataFrame):
          def __init__(self, df):
              # Re-wrap an existing DataFrame's underlying JVM object
              # and SQL context.
              super(DataFrameWithZipWithIndex, self).__init__(df._jdf, df.sql_ctx)

          def zipWithIndex(self):
              # Prepend a positional index column by round-tripping
              # through the underlying RDD.
              return (self.rdd
                  .zipWithIndex()
                  .map(lambda row: (row[1], ) + row[0])
                  .toDF(["_idx"] + self.columns))

      Example usage:

      df = sc.parallelize([("a", 1)]).toDF(["foo", "bar"])
      
      with_zipwithindex = DataFrameWithZipWithIndex(df)
      
      isinstance(with_zipwithindex, DataFrame)
      
      True
      
      with_zipwithindex.zipWithIndex().show()
      
      +----+---+---+
      |_idx|foo|bar|
      +----+---+---+
      |   0|  a|  1|
      +----+---+---+
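
      If you prefer the monkey-patching route over a wrapper class, a minimal sketch would look like this (the helper name zip_with_index is illustrative, not part of the API):

      from pyspark.sql import DataFrame

      def zip_with_index(self):
          # Same logic as the subclass above: prepend a positional
          # index column via the underlying RDD.
          return (self.rdd
              .zipWithIndex()
              .map(lambda row: (row[1], ) + row[0])
              .toDF(["_idx"] + self.columns))

      # Attach the function to the class itself, so every DataFrame
      # in the session gains a zipWithIndex() method.
      DataFrame.zipWithIndex = zip_with_index

      After this, df.zipWithIndex().show() works on any plain DataFrame without wrapping it first.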
      
    • Practically speaking you won't be able to do much here. DataFrame is a thin wrapper around a JVM object (see the sketch below) and doesn't do much beyond providing docstrings, converting arguments to the form required natively, calling JVM methods, and wrapping the results with Python adapters where necessary.

      With plain Python code you won't even be able to get near DataFrame / Dataset internals, let alone modify their core behavior. If you're looking for a standalone, Python-only Spark DataFrame implementation, it is not possible.
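
      To make the "thin wrapper" point concrete, here is a rough sketch of the delegation pattern (illustrative only, not the library's actual source):

      class ThinWrapper(object):
          def __init__(self, jdf, sql_ctx):
              self._jdf = jdf        # py4j proxy to the JVM Dataset
              self.sql_ctx = sql_ctx

          def count(self):
              # No Python-side logic: forward the call to the JVM
              # object and return the converted result.
              return int(self._jdf.count())

      All the heavy lifting happens on the JVM side, which is why a pure-Python reimplementation or deep behavioral changes are out of reach.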
