Question
Spark Dataframe Schema:
StructType([
    StructField("a", StringType(), False),
    StructField("b", StringType(), True),
    StructField("c", BinaryType(), False),
    StructField("d", ArrayType(StringType(), False), True),
    StructField("e", TimestampType(), True)
])
When I write the DataFrame to parquet and load it into BigQuery, BigQuery interprets the schema differently. The pipeline is a simple read from JSON followed by a parquet write using a Spark DataFrame.
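For reference, the write is essentially this (the paths are placeholders, not my real ones):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read newline-delimited JSON; Spark infers the schema shown above.
df = spark.read.json("gs://my-bucket/input/*.json")  # hypothetical path

# Write parquet. Arrays come out in parquet's standard 3-level list
# encoding: an outer group annotated LIST, a repeated group "list", and
# an inner field "element" -- which matches the RECORD / "list" /
# "element" nesting BigQuery shows below.
df.write.mode("overwrite").parquet("gs://my-bucket/output/")  # hypothetical path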
BigQuery Schema:
[
  {
    "type": "STRING",
    "name": "a",
    "mode": "REQUIRED"
  },
  {
    "type": "STRING",
    "name": "b",
    "mode": "NULLABLE"
  },
  {
    "type": "BYTES",
    "name": "c",
    "mode": "REQUIRED"
  },
  {
    "fields": [
      {
        "fields": [
          {
            "type": "STRING",
            "name": "element",
            "mode": "NULLABLE"
          }
        ],
        "type": "RECORD",
        "name": "list",
        "mode": "REPEATED"
      }
    ],
    "type": "RECORD",
    "name": "d",
    "mode": "NULLABLE"
  },
  {
    "type": "TIMESTAMP",
    "name": "e",
    "mode": "NULLABLE"
  }
]
Is this something to do with the way Spark writes parquet, or the way BigQuery reads it? Any idea how I can fix this?
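For context, two settings look relevant here, though I haven't verified that either one changes what BigQuery infers (the paths and table names below are made up):

# Spark side: write arrays in the older 2-level parquet list layout
# instead of the standard 3-level list/element structure.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.conf.set("spark.sql.parquet.writeLegacyFormat", "true")

# BigQuery side: ask the loader to collapse the list/element wrappers
# into a plain REPEATED field instead of a RECORD named "list".
from google.cloud import bigquery

client = bigquery.Client()
parquet_options = bigquery.format_options.ParquetOptions()
parquet_options.enable_list_inference = True
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    parquet_options=parquet_options,
)
client.load_table_from_uri(
    "gs://my-bucket/output/*.parquet",  # hypothetical URI
    "my_project.my_dataset.my_table",   # hypothetical table id
    job_config=job_config,
).result()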
Source: https://stackoverflow.com/questions/53674838/spark-writing-parquet-arraystring-converts-to-a-different-datatype-when-loadin