How to create a Hive table from a Spark DataFrame, using its schema?

渐次进展 2020-12-13 21:45

I want to create a Hive table using my Spark DataFrame's schema. How can I do that?

For fixed columns, I can use:

val CreateTable_query = "Create T
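
For example, something like this (a sketch; the table name and the fixed columns here are hypothetical):

    // Hypothetical hand-written DDL with fixed columns
    val CreateTable_query = "CREATE TABLE my_table (id INT, name STRING)"
    spark.sql(CreateTable_query)

But I don't want to hand-write every column, so I'd like to build this query from the dataframe's schema.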


        
5 Answers
  • 2020-12-13 21:52

    Here is a PySpark version that creates a Hive table from Parquet files. You may have generated the Parquet files with an inferred schema and now want to push the definition to the Hive metastore. You can push the definition not only to the Hive metastore but also to systems like AWS Glue or AWS Athena. Here I am using spark.sql to create the permanent table.

        # Location where my parquet files are present.
        df = spark.read.parquet("s3://my-location/data/")

        buf = []
        buf.append('CREATE EXTERNAL TABLE test123 (')
        keyanddatatypes = df.dtypes
        sizeof = len(df.dtypes)
        print("size----------", sizeof)
        count = 1
        for eachvalue in keyanddatatypes:
            print(count, sizeof, eachvalue)
            if count == sizeof:
                # last column: no trailing comma
                total = str(eachvalue[0]) + ' ' + str(eachvalue[1])
            else:
                total = str(eachvalue[0]) + ' ' + str(eachvalue[1]) + ','
            buf.append(total)
            count = count + 1

        buf.append(' )')
        buf.append(' STORED AS PARQUET ')
        buf.append("LOCATION '")
        buf.append('s3://my-location/data/')
        buf.append("'")
        ## partition by pt
        tabledef = ''.join(buf)

        print("---------print definition ---------")
        print(tabledef)
        ## Create the table using spark.sql. Assuming you are using Spark 2.1+.
        spark.sql(tabledef)
    
  • 2020-12-13 21:52

    From Spark 2.4 onward you can use the function dataframe.schema.toDDL to get the column names and types (even for nested structs).
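
    A minimal sketch of how that can be used (the DataFrame and the table name my_new_table are illustrative, assuming an active SparkSession named spark):

        // Any DataFrame will do; toDDL lives on its StructType schema (Spark 2.4+)
        val df = spark.range(3).selectExpr("id", "cast(id as string) as name")

        // toDDL renders the schema as a comma-separated column list,
        // e.g. `id` BIGINT,`name` STRING
        val ddl = df.schema.toDDL

        spark.sql(s"CREATE TABLE IF NOT EXISTS my_new_table ($ddl) STORED AS PARQUET")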

  • 2020-12-13 21:58

    Another way is to use the methods available on StructType: sql, simpleString, treeString, etc.

    You can create DDL from a DataFrame's schema, and you can create a DataFrame's schema from your DDL.
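
    For instance, a quick sketch of what those methods return, on a made-up two-column schema:

        import org.apache.spark.sql.types._

        val schema = StructType(Seq(
          StructField("id", IntegerType),
          StructField("name", StringType)))

        schema.simpleString  // struct<id:int,name:string>
        schema.sql           // STRUCT<`id`: INT, `name`: STRING>
        schema.treeString    // root
                             //  |-- id: integer (nullable = true)
                             //  |-- name: string (nullable = true)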

    Here is one example (up to Spark 2.3):

        // Set up a sample test table to create a DataFrame from
        spark.sql("""drop database if exists hive_test cascade""")
        spark.sql("""create database hive_test""")
        spark.sql("use hive_test")
        spark.sql("""CREATE TABLE hive_test.department(
        department_id int,
        department_name string
        )
        """)
        spark.sql("""
        INSERT INTO hive_test.department values ("101","Oncology")
        """)

        spark.sql("SELECT * FROM hive_test.department").show()
    /***************************************************************/
    

    Now I have a DataFrame to play with. In real cases you'd use DataFrame readers to create a DataFrame from files/databases. Let's use its schema to create DDL.

        // Create DDL from the Spark DataFrame schema using the simpleString function

        // Regex to remove the unwanted characters
        val sqlrgx = """(struct<)|(>)|(:)""".r

        // Create the DDL SQL string and remove the unwanted characters
        val sqlString = sqlrgx.replaceAllIn(spark.table("hive_test.department").schema.simpleString, " ")

        // Create the table with sqlString
        spark.sql(s"create table hive_test.department2( $sqlString )")

    From Spark 2.4 onwards you can use the fromDDL and toDDL methods on StructType:

        import org.apache.spark.sql.types.StructType

        val fddl = """
          department_id int,
          department_name string,
          business_unit string
          """

        // Easily create a StructType from the DDL string using fromDDL
        val schema3: StructType = StructType.fromDDL(fddl)

        // Create a DDL string from the StructType using toDDL
        val tddl = schema3.toDDL

        spark.sql("drop table if exists hive_test.department2 purge")

        // Create the table using the string tddl
        spark.sql(s"""create table hive_test.department2 ( $tddl )""")

        // Test by inserting sample rows and selecting
        spark.sql("""
        INSERT INTO hive_test.department2 values ("101","Oncology","MDACC Texas")
        """)
        spark.table("hive_test.department2").show()
        spark.sql("drop table hive_test.department2")
    
  • 2020-12-13 22:11

    Assuming you are using Spark 2.1.0 or later and my_DF is your DataFrame:

    import java.util.Arrays;
    import java.util.stream.Collectors;
    import org.apache.spark.sql.types.StructType;

    //get the schema as a string of comma-separated field-datatype pairs
    StructType my_schema = my_DF.schema();
    String columns = Arrays.stream(my_schema.fields())
                           .map(field -> field.name() + " " + field.dataType().typeName())
                           .collect(Collectors.joining(","));

    //drop the table if already created
    spark.sql("drop table if exists my_table");
    //create the table using the dataframe schema
    spark.sql("create table my_table(" + columns + ") "
        + "row format delimited fields terminated by '|' location '/my/hdfs/location'");
    //write the dataframe data to the hdfs location for the created Hive table
    my_DF.write()
         .format("com.databricks.spark.csv")
         .option("delimiter", "|")
         .mode("overwrite")
         .save("/my/hdfs/location");


    The other method uses a temp table:

    my_DF.createOrReplaceTempView("my_temp_table");
    spark.sql("drop table if exists my_table");
    spark.sql("create table my_table as select * from my_temp_table");
    
  • 2020-12-13 22:16

    As per your question, it looks like you want to create a table in Hive using your DataFrame's schema. But since, as you say, the DataFrame has many columns, there are two options:

    • The 1st is to create the Hive table directly from the DataFrame.
    • The 2nd is to take the schema of the DataFrame and create the table in Hive from it.

    Consider this code:

    package hive.example

    import org.apache.spark.SparkConf
    import org.apache.spark.SparkContext
    import org.apache.spark.sql.Row
    import org.apache.spark.sql.SparkSession

    object checkDFSchema extends App {
      val cc = new SparkConf
      val sc = new SparkContext(cc)
      val sparkSession = SparkSession.builder().enableHiveSupport().getOrCreate()
      // First option: create the Hive table directly through the DataFrame
      val DF = sparkSession.sql("select * from salary")
      DF.createOrReplaceTempView("tempTable")
      sparkSession.sql("Create table yourtable as select * from tempTable")
      // Second option: create the Hive table from the schema
      val oldDFF = sparkSession.sql("select * from salary")
      // Generate the schema out of the DataFrame
      val schema = oldDFF.schema
      // Generate an RDD of your data
      val rowRDD = sc.parallelize(Seq(Row(100, "a", 123)))
      // Create a new DF from the data and schema
      val newDFwithSchema = sparkSession.createDataFrame(rowRDD, schema)
      newDFwithSchema.createOrReplaceTempView("tempTable")
      sparkSession.sql("create table FinalTable AS select * from tempTable")
    }
    