Serialize table to nested JSON using Apache Spark

Submitted by 白昼怎懂夜的黑 on 2021-02-08 09:47:09

Question


I have a set of records like the following sample

+---------+-------------+----------+
|ACCOUNTNO|VEHICLENUMBER|CUSTOMERID|
+---------+-------------+----------+
| 10003014|    MH43AJ411|  20000000|
| 10003014|    MH43AJ411|  20000001|
| 10003015|   MH12GZ3392|  20000002|
+---------+-------------+----------+

I want to convert it to JSON, and it should look like this (one object per account):

{
    "ACCOUNTNO":10003014,
    "VEHICLE": [
        { "VEHICLENUMBER":"MH43AJ411", "CUSTOMERID":20000000},
        { "VEHICLENUMBER":"MH43AJ411", "CUSTOMERID":20000001}
    ]
}
{
    "ACCOUNTNO":10003015,
    "VEHICLE": [
        { "VEHICLENUMBER":"MH12GZ3392", "CUSTOMERID":20000002}
    ]
}

I have written the following program, but it fails to produce this output: it writes one flat JSON object per row instead of nesting the vehicles under each account.

package com.report.pack1.spark

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql._


object sqltojson {

  def main(args:Array[String]) {
    System.setProperty("hadoop.home.dir", "C:/winutil/")
    val conf = new SparkConf().setAppName("SQLtoJSON").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._      
    val jdbcSqlConnStr = "jdbc:sqlserver://192.168.70.88;databaseName=ISSUER;user=bhaskar;password=welcome123;"      
    val jdbcDbTable = "[HISTORY].[TP_CUSTOMER_PREPAIDACCOUNTS]"
    val jdbcDF = sqlContext.read.format("jdbc").options(Map("url" -> jdbcSqlConnStr,"dbtable" -> jdbcDbTable)).load()
    jdbcDF.registerTempTable("tp_customer_account")
    val res01 = sqlContext.sql("SELECT ACCOUNTNO, VEHICLENUMBER, CUSTOMERID FROM tp_customer_account GROUP BY ACCOUNTNO, VEHICLENUMBER, CUSTOMERID ORDER BY ACCOUNTNO ")
    res01.coalesce(1).write.json("D:/res01.json")      
  }
}

How can I serialize it in the given format? Thanks in advance!


Answer 1:


You can use struct and groupBy to get your desired result. Below is the code, with comments where needed.

// This runs as-is in spark-shell; in a standalone application you would also need
// import spark.implicits._ (for toDF) and import org.apache.spark.sql.functions._
// (for struct and collect_list).
val df = Seq((10003014,"MH43AJ411",20000000),
  (10003014,"MH43AJ411",20000001),
  (10003015,"MH12GZ3392",20000002)
).toDF("ACCOUNTNO","VEHICLENUMBER","CUSTOMERID")

df.show
//output
//+---------+-------------+----------+
//|ACCOUNTNO|VEHICLENUMBER|CUSTOMERID|
//+---------+-------------+----------+
//| 10003014|    MH43AJ411|  20000000|
//| 10003014|    MH43AJ411|  20000001|
//| 10003015|   MH12GZ3392|  20000002|
//+---------+-------------+----------+

//create a struct column then group by ACCOUNTNO column and finally convert DF to JSON
df.withColumn("VEHICLE",struct("VEHICLENUMBER","CUSTOMERID")).
  select("VEHICLE","ACCOUNTNO"). //only select reqired columns
  groupBy("ACCOUNTNO"). 
  agg(collect_list("VEHICLE").as("VEHICLE")). //for the same group create a list of vehicles
  toJSON. //convert to json
  show(false)

//output
//+------------------------------------------------------------------------------------------------------------------------------------------+
//|value                                                                                                                                     |
//+------------------------------------------------------------------------------------------------------------------------------------------+
//|{"ACCOUNTNO":10003014,"VEHICLE":[{"VEHICLENUMBER":"MH43AJ411","CUSTOMERID":20000000},{"VEHICLENUMBER":"MH43AJ411","CUSTOMERID":20000001}]}|
//|{"ACCOUNTNO":10003015,"VEHICLE":[{"VEHICLENUMBER":"MH12GZ3392","CUSTOMERID":20000002}]}                                                   |
//+------------------------------------------------------------------------------------------------------------------------------------------+
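
Note that toJSON returns a Dataset[String] with one JSON object per row (JSON Lines), which is also the format Spark's json writer produces: each account lands on its own line rather than inside a single enclosing JSON document.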

You can also write this DataFrame to a file using the same write statement you used in the question.
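
For completeness, here is a minimal end-to-end sketch that applies the same grouping to the JDBC table from the question and writes the result to a file. It assumes Spark 2.x's SparkSession API; the connection string, table name, and output path are copied verbatim from the question and are not verified here.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{struct, collect_list}

val spark = SparkSession.builder()
  .appName("SQLtoJSON")
  .master("local[*]")
  .getOrCreate()

// Connection details taken from the question
val jdbcSqlConnStr = "jdbc:sqlserver://192.168.70.88;databaseName=ISSUER;user=bhaskar;password=welcome123;"
val jdbcDbTable = "[HISTORY].[TP_CUSTOMER_PREPAIDACCOUNTS]"

// Read the table over JDBC
val jdbcDF = spark.read.format("jdbc")
  .option("url", jdbcSqlConnStr)
  .option("dbtable", jdbcDbTable)
  .load()

// Nest vehicles under each account, then write one JSON object per line
jdbcDF.withColumn("VEHICLE", struct("VEHICLENUMBER", "CUSTOMERID"))
  .select("VEHICLE", "ACCOUNTNO")
  .groupBy("ACCOUNTNO")
  .agg(collect_list("VEHICLE").as("VEHICLE"))
  .coalesce(1)
  .write.json("D:/res01.json")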



Source: https://stackoverflow.com/questions/51381068/serialize-table-to-nested-json-using-apache-spark
