Specifying col type in Sparklyr (spark_read_csv)

Posted by 放肆的年华 on 2019-12-07 08:30:54

Question


I am reading a CSV into Spark using sparklyr:

schema <- structType(structField("TransTime", "array<timestamp>", TRUE),
                 structField("TransDay", "Date", TRUE))

spark_read_csv(sc, filename, "path", infer_schema = FALSE, schema = schema)

But I get:

Error: could not find function "structType"

How do I specify column types using spark_read_csv?

Thanks in advance.


Answer 1:


The structType function comes from Spark's Scala API. In sparklyr, you specify the data types by passing a named list to the "columns" argument. Suppose we have the following CSV (data.csv):

name,birthdate,age,height
jader,1994-10-31,22,1.79
maria,1900-03-12,117,1.32

The call to read this data is:

mycsv <- spark_read_csv(sc, "mydate",
                        path = "data.csv",
                        memory = TRUE,
                        infer_schema = FALSE,  # attention to this
                        columns = list(
                          name = "character",
                          birthdate = "date",  # or character, since date functions are needed
                          age = "integer",
                          height = "double"))
# Type mapping (R type -> storage type):
# integer   = "INTEGER"
# double    = "REAL"
# character = "STRING"
# logical   = "INTEGER"
# list      = "BLOB"
# date = character = "STRING"  # not sure

To manipulate date types you must use the Hive date functions, not R functions.

mycsv %>% mutate(birthyear = year(birthdate))
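As a slightly fuller sketch (assuming the `mycsv` table from above and a live Spark connection), several Hive date functions can be combined in one pipeline; `month()` and `datediff()` here are standard Hive UDFs, translated to Spark SQL by dplyr rather than evaluated in R:

```r
library(dplyr)
library(sparklyr)

# These function calls are passed through to Hive/Spark SQL, not run as R code:
mycsv %>%
  mutate(birthyear  = year(birthdate),                       # Hive year()
         birthmonth = month(birthdate),                      # Hive month()
         days_alive = datediff(current_date(), birthdate))   # Hive datediff()
```

Any function dplyr does not recognize is passed to the backend as-is, which is why Hive UDF names work inside mutate() even though they are not R functions.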

Reference: https://spark.rstudio.com/articles/guides-dplyr.html#hive-functions




Answer 2:


We have an example of how to do this in one of the articles on the official sparklyr site; here is the link: http://spark.rstudio.com/example-s3.html#data_import



Source: https://stackoverflow.com/questions/43003185/specifying-col-type-in-sparklyr-spark-read-csv
