Creating and using Spark-Hive UDF for Date

Submitted by 坚强是说给别人听的谎言 on 2020-05-17 05:54:05

Question


Note: This question is linked from this question: Creating UDF function with NonPrimitive Data Type and using in Spark-sql Query: Scala

Hi, I have created a method in Scala:

    package test.udf.demo

    import org.apache.spark.sql.{DataFrame, SparkSession}
    import org.apache.spark.sql.functions._

    object UDF_Class {
      def transformDate(dateColumn: String, df: DataFrame): DataFrame = {
        val sparksession = SparkSession.builder().appName("App").getOrCreate()
        val d = df
          .withColumn("calculatedCol", month(to_date(from_unixtime(unix_timestamp(col(dateColumn), "dd-MM-yyyy")))))
          .withColumn("date1", when(col("calculatedCol") === "01", concat(concat(year(to_date(from_unixtime(unix_timestamp(col("calculatedCol"), "dd-MM-yyyy")))) - 1, lit('-')), substring(year(to_date(from_unixtime(unix_timestamp(col("calculatedCol"), "dd-MM-yyyy")))), 3, 4)))
            .when(col("calculatedCol") === "02", concat(concat(year(to_date(from_unixtime(unix_timestamp(col("calculatedCol"), "dd-MM-yyyy")))) - 1, lit('-')), substring(year(to_date(from_unixtime(unix_timestamp(col("calculatedCol"), "dd-MM-yyyy")))), 3, 4)))
            .when(col("calculatedCol") === "03", concat(concat(year(to_date(from_unixtime(unix_timestamp(col("calculatedCol"), "dd-MM-yyyy")))) - 1, lit('-')), substring(year(to_date(from_unixtime(unix_timestamp(col("calculatedCol"), "dd-MM-yyyy")))), 3, 4)))
            .otherwise(concat(concat(year(to_date(from_unixtime(unix_timestamp(col("calculatedCol"), "dd-MM-yyyy")))), lit('-')), substring(year(to_date(from_unixtime(unix_timestamp(col("calculatedCol"), "dd-MM-yyyy")))) + 1, 3, 4))))
        // attempt to register the method itself as a SQL UDF
        val d1 = sparksession.udf.register("transform", transformDate _)
        d
      }
    }

I want to use this transformDate method in my Spark SQL query, which is separate Scala code in the same package:

    package test.udf.demo

    import test.udf.demo.UDF_Class.transformDate

    // given an existing SparkSession named sparksession
    sparksession.sql("select id, name, salary, transform(dob) from dbname.tablename")

But this gives me the error "not a temp or permanent registered function in default database". Can someone please guide me?


Answer 1:


AFAIK, Spark user-defined functions cannot accept or return a DataFrame. That is what stops your UDF from registering.
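
As a contrast, here is a minimal sketch of a scalar UDF that Spark will accept: it maps plain values to plain values rather than DataFrames. The fiscal-year rule (months 01-03 labelled with the previous year) is my reading of the question's code, and the table/column names are the question's own:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("App").getOrCreate()

    // A registrable UDF is String => String here, not DataFrame => DataFrame.
    spark.udf.register("transform", (dob: String) => {
      val Array(_, mm, yyyy) = dob.split("-")   // expects "dd-MM-yyyy"; no error handling in this sketch
      val year = yyyy.toInt
      if (mm.toInt <= 3) s"${year - 1}-${yyyy.substring(2)}"   // Jan-Mar: previous fiscal year
      else s"$year-${(year + 1).toString.substring(2)}"        // Apr-Dec: label with the next year
    })

    // Once registered, the function is callable from SQL:
    // spark.sql("select id, name, salary, transform(dob) from dbname.tablename")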




Answer 2:


First of all, a Spark SQL UDF is a row-based function, not a DataFrame-based method; an aggregate UDF likewise takes a series of Rows. So the UDF definition is wrong. If I understood your requirement correctly, you want to create a configurable CASE expression. That is easily achieved with expr(), or by interpolating the expression string into the query:

    import spark.implicits._
    val exprStr = "case when calculatedCol = '01' then <here goes your code statements> end as FP"
    val modifiedDf = spark.sql(s"select id, name, salary, $exprStr from dbname.tablename")

This will work.
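
For completeness, a sketch of what the fully expanded expression could look like, assuming the question's intent: a dob column in dd-MM-yyyy format, with months 01-03 labelled against the previous fiscal year. The table and column names are the question's; the exact CASE body is for you to adapt:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("App").getOrCreate()

    // Assemble the CASE expression once; it can be driven by configuration.
    val exprStr = """case when month(to_date(dob, 'dd-MM-yyyy')) <= 3
      then concat(cast(year(to_date(dob, 'dd-MM-yyyy')) - 1 as string), '-',
                  substring(cast(year(to_date(dob, 'dd-MM-yyyy')) as string), 3, 2))
      else concat(cast(year(to_date(dob, 'dd-MM-yyyy')) as string), '-',
                  substring(cast(year(to_date(dob, 'dd-MM-yyyy')) + 1 as string), 3, 2))
      end as FP"""

    val modifiedDf = spark.sql(s"select id, name, salary, $exprStr from dbname.tablename")

Because the whole transformation lives in one SQL string, it can be stored in configuration and swapped out without touching the Scala code, which is what the question's UDF attempt seemed to be after.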



Source: https://stackoverflow.com/questions/61662077/creating-and-using-spark-hive-udf-for-date
