Evolving a schema with Spark DataFrame

太阳男子 2020-12-07 04:08

I'm working with a Spark dataframe which could be loading data from one of three different schema versions:

// Original
{ \"A\": {\"B\": 1 } }
// Addition \         


        
2 Answers
  • 2020-12-07 04:28

    zero323 has answered the question, but in Scala. This is the same approach in Java.

    import java.util.Arrays;

    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.sql.DataFrame;
    import org.apache.spark.sql.SQLContext;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructType;

    public void evolvingSchema(JavaSparkContext sc) {
        String versionOne = "{ \"A\": {\"B\": 1 } }";
        String versionTwo = "{ \"A\": {\"B\": 1 }, \"C\": 2 }";
        String versionThree = "{ \"A\": {\"B\": 1, \"D\": 3 }, \"C\": 2 }";

        process(sc, "1", versionOne);
        process(sc, "2", versionTwo);
        process(sc, "3", versionThree);
    }

    private static void process(JavaSparkContext sc, String version, String data) {
        // One schema that covers every version; the new fields are nullable.
        StructType schema = DataTypes.createStructType(Arrays.asList(
                DataTypes.createStructField("A",
                        DataTypes.createStructType(Arrays.asList(
                                DataTypes.createStructField("B", DataTypes.LongType, true),
                                DataTypes.createStructField("D", DataTypes.LongType, true))), true),
                DataTypes.createStructField("C", DataTypes.LongType, true)));

        SQLContext sqlContext = new SQLContext(sc);
        DataFrame df = sqlContext.read().schema(schema).json(sc.parallelize(Arrays.asList(data)));

        // With the shared schema, both selects succeed for every version.
        try {
            df.select("C").collect();
        } catch (Exception e) {
            System.out.println("Failed to select C for version " + version);
        }
        try {
            df.select("A.D").collect();
        } catch (Exception e) {
            System.out.println("Failed to select A.D for version " + version);
        }
    }
    
  • 2020-12-07 04:40

    JSON sources are not well suited to data with an evolving schema (consider Avro or Parquet instead), but the simple solution is to use the same schema for all sources and make the new fields optional / nullable:

    import org.apache.spark.sql.types.{StructType, StructField, LongType}
    
    val schema = StructType(Seq(
      StructField("A", StructType(Seq(
        StructField("B", LongType, true), 
        StructField("D", LongType, true)
      )), true),
      StructField("C", LongType, true)))
    

    You can pass a schema like this to DataFrameReader:

    val rddV1 = sc.parallelize(Seq("{ \"A\": {\"B\": 1 } }"))
    val df1 = sqlContext.read.schema(schema).json(rddV1)
    
    val rddV2 = sc.parallelize(Seq("{ \"A\": {\"B\": 1 }, \"C\": 2 }"))
    val df2 = sqlContext.read.schema(schema).json(rddV2)
    
    val rddV3 = sc.parallelize(Seq("{ \"A\": {\"B\": 1, \"D\": 3 }, \"C\": 2 }"))
    val df3 = sqlContext.read.schema(schema).json(rddV3)
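
    The same schema can be passed when reading from files instead of an in-memory RDD. A minimal sketch, assuming your JSON documents live under a hypothetical path/to/json directory:

    // The explicit schema applies to file-based loads as well;
    // "path/to/json" is a placeholder path.
    val dfFromFiles = sqlContext.read.schema(schema).json("path/to/json")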
    

    Either way, you get a consistent structure independent of the variant:

    require(df1.schema == df2.schema && df2.schema == df3.schema)
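
    Since the schemas match, the three frames can also be combined directly. A small sketch using unionAll, the Spark 1.x union operator:

    // Union works because all variants share a single schema.
    val all = df1.unionAll(df2).unionAll(df3)
    all.show()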
    

    Missing columns are automatically set to null:

    df1.printSchema
    // root
    //  |-- A: struct (nullable = true)
    //  |    |-- B: long (nullable = true)
    //  |    |-- D: long (nullable = true)
    //  |-- C: long (nullable = true)
    
    df1.show
    // +--------+----+
    // |       A|   C|
    // +--------+----+
    // |[1,null]|null|
    // +--------+----+
    
    df2.show
    // +--------+---+
    // |       A|  C|
    // +--------+---+
    // |[1,null]|  2|
    // +--------+---+
    
    df3.show
    // +-----+---+
    // |    A|  C|
    // +-----+---+
    // |[1,3]|  2|
    // +-----+---+
    

    Note:

    This solution is data-source dependent. It may or may not work with other sources, and it can even produce malformed records.
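
    For example (a hypothetical record; exact behavior varies by Spark version and parse mode), a document whose types conflict with the schema is not necessarily rejected outright:

    // "B" arrives as a string although the schema declares a long.
    val rddBad = sc.parallelize(Seq("""{ "A": {"B": "oops" } }"""))
    val dfBad = sqlContext.read.schema(schema).json(rddBad)
    dfBad.show()
    // Depending on the version, the mismatched record may surface
    // as all-null fields rather than as a parse error.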
