How to add a Map column to Spark dataset?


Question


I have a Java Map variable, say Map<String, String> singleColMap. I want to add this Map variable to a dataset as a new column value in Spark 2.2 (Java 1.8).

I tried the code below, but it doesn't work:

ds.withColumn("cMap", lit(singleColMap).cast(MapType(StringType, StringType)))

Can someone help with this?


Answer 1:


You can use typedLit, which was introduced in Spark 2.2.0. From the documentation:

The difference between this function and lit is that this function can handle parameterized scala types e.g.: List, Seq and Map.

So in this case, the following should be enough:

ds.withColumn("cMap", typedLit(singleColMap))



Answer 2:


This can easily be solved in Scala with typedLit, but I couldn't find a way to make that method work in Java, because it requires a TypeTag, which I don't think is even possible to create in Java.

However, I managed to mostly emulate in Java what typedLit does, except for the type-inference part, so the Spark type has to be set explicitly:

public static Column typedMap(Map<String, String> map) {
    // Build a map Literal with an explicit type, since Java cannot supply a TypeTag.
    return new Column(Literal.create(
            JavaConverters.mapAsScalaMapConverter(map).asScala(),
            createMapType(StringType, StringType)));
}

Then it can be used like this:

ds.withColumn("cMap", typedMap(singleColMap))


Source: https://stackoverflow.com/questions/52417532/how-to-add-a-map-column-to-spark-dataset
