How to add a new column with random values to an existing DataFrame in Scala

Asked by 再見小時候 on 2020-11-27 08:51

I have a DataFrame loaded from a Parquet file and I need to add a new column with random data, where each value is different from the others. This is my actual code and the

2 Answers
  •  暗喜
    暗喜 (OP)
    2020-11-27 09:05

    You can make use of monotonically_increasing_id to generate values that are guaranteed to be different for each row (note that they are unique and increasing, not truly random).

    Then you can define a UDF to append any string to it after casting it to String, since monotonically_increasing_id returns a Long by default.

    scala> var df = Seq(("Ron"), ("John"), ("Steve"), ("Brawn"), ("Rock"), ("Rick")).toDF("names")
    scala> df.show
    +-----+
    |names|
    +-----+
    |  Ron|
    | John|
    |Steve|
    |Brawn|
    | Rock|
    | Rick|
    +-----+
    
    scala> val appendD = spark.sqlContext.udf.register("appendD", (s: String) => s.concat("D"))
    
    scala> df = df.withColumn("ID",monotonically_increasing_id).selectExpr("names","cast(ID as String) ID").withColumn("ID",appendD($"ID"))
    scala> df.show
    +-----+---+
    |names| ID|
    +-----+---+
    |  Ron| 0D|
    | John| 1D|
    |Steve| 2D|
    |Brawn| 3D|
    | Rock| 4D|
    | Rick| 5D|
    +-----+---+
    
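    A minimal alternative sketch, without registering a UDF: the same unique-per-row IDs can be built from Spark's built-in column functions, and if genuinely random (not just distinct) values are wanted, the SQL `uuid()` function produces a different random UUID per row. The column names `ID` and `rand_id` below are illustrative choices, not from the original question.

    ```scala
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    val spark = SparkSession.builder.master("local[*]").getOrCreate()
    import spark.implicits._

    val df = Seq("Ron", "John", "Steve", "Brawn", "Rock", "Rick").toDF("names")

    val result = df
      // unique, increasing ID per row, cast to String with "D" appended —
      // same effect as the UDF above, but done with built-in functions
      .withColumn("ID", concat(monotonically_increasing_id().cast("string"), lit("D")))
      // uuid() (Spark SQL 2.3+) yields a distinct random value per row
      .withColumn("rand_id", expr("uuid()"))

    result.show(false)
    ```

    Note that monotonically_increasing_id encodes the partition ID in the upper bits, so on a multi-partition DataFrame the IDs are unique but not consecutive (e.g. they may jump from 5D to 8589934592D at a partition boundary).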
