Possible to handle multi character delimiter in spark [duplicate]

Submitted by 孤街醉人 on 2020-01-30 06:27:25

Question


I have [~] as the delimiter in some CSV files I am reading.

1[~]a[~]b[~]dd[~][~]ww[~][~]4[~]4[~][~][~][~][~]

I have tried this

val rddFile = sc.textFile("file.csv")
val rddTransformed = rddFile.map(eachLine => eachLine.split("[~]"))
val df = rddTransformed.toDF()
display(df)

However, the issue with this is that each line comes back as an array of values with stray [ and ] characters left in every field. So the array would be

["1[","]a[","]b[",...]

I can't use

val df = spark.read.option("sep", "[~]").csv("file.csv")

because a multi-character separator is not supported. What other approach can I take?

1[~]a[~]b[~]dd[~][~]ww[~][~]4[~]4[~][~][~][~][~]
2[~]a[~]b[~]dd[~][~]ww[~][~]4[~]4[~][~][~][~][~]
3[~]a[~]b[~]dd[~][~]ww[~][~]4[~]4[~][~][~][~][~]

Edit: this is not a duplicate; the linked thread is about multiple delimiters, whereas this is about a single multi-character delimiter.


Answer 1:


Try the below:

val df = spark.read.format("csv").load("inputpath")
// The input lines contain no commas, so each one is read as a single CSV
// column; mkString recovers the raw line, and the escaped regex \[\~\]
// splits on the literal [~] delimiter.
df.rdd.map(i => i.mkString.split("\\[\\~\\]")).toDF().show(false)

For your other requirement (one named column per field):

import org.apache.spark.sql.functions.{col, split}

// Re-join the split fields with commas into a single "value" column.
val df1 = df.rdd.map(i => i.mkString.split("\\[\\~\\]").mkString(",")).toDF()
// Count the columns from the first row, then split "value" on the commas
// and fan the pieces out into col_0 .. col_n.
val iterationColumnLength = df1.rdd.first.mkString(",").split(",").length
df1.withColumn("value", split(col("value"), ","))
  .select((0 until iterationColumnLength).map(i => col("value").getItem(i).as("col_" + i)): _*)
  .show
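
One caveat with both snippets: Java's split drops trailing empty strings by default, so the empty fields produced by the trailing [~][~] in the sample rows are silently lost. A sketch of a variant that keeps them, assuming the same file.csv, by passing a -1 limit to split:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.getOrCreate()
import spark.implicits._

// \Q...\E quotes the delimiter so the regex matches the literal [~];
// the -1 limit keeps trailing empty fields instead of dropping them.
val rows = spark.sparkContext.textFile("file.csv")
  .map(_.split("\\Q[~]\\E", -1))

val numCols = rows.first.length
val result = rows.toDF("value")
  .select((0 until numCols).map(i => $"value".getItem(i).as("col_" + i)): _*)

result.show(false)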



Source: https://stackoverflow.com/questions/52083828/possible-to-handle-multi-character-delimiter-in-spark
