How to detect the right encoding for read.csv?

Backend · open · 6 answers · 1815 views
Asked by 遥遥无期 on 2020-11-27 11:02

I have this file (http://b7hq6v.alterupload.com/en/) that I want to read into R with read.csv, but I am not able to detect the correct encoding. It seems to be a

6 Answers

    挽巷 (OP)
    2020-11-27 11:28

    First of all, based on a more general question on StackOverflow, it is not possible to detect the encoding of a file with 100% certainty.

    I've struggled with this many times and have come to a non-automatic solution:

    Use iconvlist to get all possible encodings:

    codepages <- setNames(iconvlist(), iconvlist())
    

    Then read the data using each of them:

    x <- lapply(codepages, function(enc) try(read.table("encoding.asc",
                       fileEncoding=enc,
                       nrows=3, header=TRUE, sep="\t"))) # you get lots of errors/warning here
    

    What matters here is knowing the structure of the file (separator, headers). Set the encoding with the fileEncoding argument, and read only a few rows.
    Now you can look at the results:

    unique(do.call(rbind, sapply(x, dim)))
    #        [,1] [,2]
    # 437       14    2
    # CP1200     3   29
    # CP12000    0    1
    

    The correct one seems to be the one with 3 rows and 29 columns, so let's see them:

    maybe_ok <- sapply(x, function(x) isTRUE(all.equal(dim(x), c(3,29))))
    codepages[maybe_ok]
    #    CP1200    UCS-2LE     UTF-16   UTF-16LE      UTF16    UTF16LE 
    #  "CP1200"  "UCS-2LE"   "UTF-16" "UTF-16LE"    "UTF16"  "UTF16LE" 
    

    You could look at the data too:

    x[maybe_ok]
    

    For your file, all of these encodings return identical data (partly because there is some redundancy among them, as you can see).
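    One way to confirm that the surviving candidates really parse to the same table is to compare each one against the first. A small sketch; the demo list below stands in for the real x and maybe_ok computed above:

    ```r
    # Demo stand-ins for the `x` and `maybe_ok` computed above.
    x <- list(CP1200     = data.frame(a = 1:3, b = 4:6),
              `UTF-16LE` = data.frame(a = 1:3, b = 4:6),
              `437`      = data.frame(junk = "garbled"))
    maybe_ok <- c(TRUE, TRUE, FALSE)

    ok <- x[maybe_ok]
    # TRUE when every candidate parse equals the first one
    identical_all <- all(vapply(ok[-1],
      function(tab) isTRUE(all.equal(ok[[1]], tab)),
      logical(1)))
    identical_all
    ```

    If this is FALSE, at least two candidate encodings decode to different tables, and you need to inspect them by eye.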

    If you don't know the specifics of your file, you need to use readLines with some changes in the workflow (e.g. you can't use fileEncoding, you must use length instead of dim, and you need more magic to find the correct ones).
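    A minimal sketch of that readLines-based variant (the sample file and the probe/maybe_ok names are illustrative, not from the original answer). To keep it self-contained, it first writes a small UTF-16LE file, then probes every encoding:

    ```r
    # Write a tiny tab-separated UTF-16LE sample file to probe against.
    path <- tempfile(fileext = ".txt")
    con <- file(path, open = "wb")
    writeBin(iconv("a\tb\nx\ty\n", from = "UTF-8", to = "UTF-16LE",
                   toRaw = TRUE)[[1]], con)
    close(con)

    codepages <- setNames(iconvlist(), iconvlist())

    # readLines has no fileEncoding argument, so set the encoding on the
    # connection instead; failing encodings end up as try-errors.
    probe <- lapply(codepages, function(enc)
      try(suppressWarnings(readLines(file(path, encoding = enc), n = 3)),
          silent = TRUE))

    # No dim() here: judge candidates by line count and by whether the
    # expected tab separator survived the decode.
    maybe_ok <- vapply(probe, function(r) isTRUE(tryCatch(
      !inherits(r, "try-error") && length(r) == 2 &&
        all(grepl("\t", r, fixed = TRUE, useBytes = TRUE)),
      error = function(e) FALSE)), logical(1))

    names(codepages)[maybe_ok]
    ```

    As with the read.table version, expect a family of equivalent UTF-16 aliases to survive the filter rather than a single winner.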
