Adding a new column to a DataFrame derived from other columns (Spark)

没有蜡笔的小新 2020-12-16 09:57

I'm using Spark 1.3.0 and Python. I have a dataframe and I wish to add an additional column that is derived from other columns, like this:

>>old_df.c         


        
2 Answers
  • 2020-12-16 10:24

    Additionally, we can use a udf:

    from pyspark import SparkContext
    from pyspark.sql import SQLContext
    from pyspark.sql.functions import udf, col
    from pyspark.sql.types import IntegerType

    sc = SparkContext()
    sqlContext = SQLContext(sc)

    # Sample dataframe with two integer columns
    old_df = sqlContext.createDataFrame(sc.parallelize(
        [(0, 1), (1, 3), (2, 5)]), ('col_1', 'col_2'))

    # Wrap a plain Python function as a UDF, declaring its return type
    subtract = udf(lambda col1, col2: col1 - col2, IntegerType())

    new_df = old_df.withColumn('col_n', subtract(col('col_1'), col('col_2')))
    new_df.show()
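
    With the three sample rows above, new_df.show() should print something along these lines (exact spacing may differ between Spark versions):

    +-----+-----+-----+
    |col_1|col_2|col_n|
    +-----+-----+-----+
    |    0|    1|   -1|
    |    1|    3|   -2|
    |    2|    5|   -3|
    +-----+-----+-----+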
    
  • 2020-12-16 10:42

    One way to achieve that is to use the withColumn method:

    old_df = sqlContext.createDataFrame(sc.parallelize(
        [(0, 1), (1, 3), (2, 5)]), ('col_1', 'col_2'))

    # col_n is computed directly from a column expression, no UDF needed
    new_df = old_df.withColumn('col_n', old_df.col_1 - old_df.col_2)
    

    Alternatively, you can use SQL on a registered table:

    old_df.registerTempTable('old_df')
    new_df = sqlContext.sql('SELECT *, col_1 - col_2 AS col_n FROM old_df')
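
    The snippets above follow the Spark 1.3 API from the question. As a minimal sketch of the same idea on a newer release (assuming Spark 2.x or later, where SparkSession and createOrReplaceTempView supersede SQLContext and registerTempTable; the app name below is arbitrary):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName('derived-column').getOrCreate()

    old_df = spark.createDataFrame([(0, 1), (1, 3), (2, 5)], ('col_1', 'col_2'))

    # Same column expression as above, no UDF or SQL required
    new_df = old_df.withColumn('col_n', old_df.col_1 - old_df.col_2)

    # Or the SQL route, via a temporary view
    old_df.createOrReplaceTempView('old_df')
    new_df_sql = spark.sql('SELECT *, col_1 - col_2 AS col_n FROM old_df')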
    