Question
I struggle with encoding in CSV exports. I'm from the Netherlands and we use quite a few diaereses (e.g. ë, ï) and accents (e.g. é, ó). This causes trouble when exporting to CSV and opening the file in Excel.
On macOS Mojave.
I've tried multiple encoding functions, like the following.
library(stringr)
library(readr)
test <- c("Argentinië", "België", "Haïti")
test %>%
  stringi::stri_conv(., "UTF-8", "ISO-8859-1") %>%
  write.csv2("~/Downloads/test.csv")
But this still produces weird characters in Excel:
Answer 1:
How do I make sure people import the file with the right encoding in Excel?
Don't convert to ISO-8859-1; export with readr::write_excel_csv2() instead. It writes the file as UTF-8, but with a byte order mark (BOM), which Excel understands.
library(readr)
test <- c("Argentinië", "België", "Haïti")
I need to convert test to UTF-8, because I am on Windows:
test <- enc2utf8(test)
On macOS, test should be in UTF-8 already, as that is the native encoding.
Encoding(test)
#> [1] "UTF-8" "UTF-8" "UTF-8"
Save as an Excel CSV with readr::write_excel_csv2():
write_excel_csv2(data.frame(x = test), "test.csv")
Alternatively, save as xlsx with writexl::write_xlsx():
writexl::write_xlsx(data.frame(x = test), "test.xlsx")
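To confirm that Excel will see the BOM, one can inspect the first three bytes of the file readr wrote. This is a sketch; the tempfile() path is illustrative, not from the original answer.

```r
library(readr)

test <- c("Argentinië", "België", "Haïti")
path <- tempfile(fileext = ".csv")  # illustrative output path
write_excel_csv2(data.frame(x = test), path)

# The UTF-8 byte order mark is EF BB BF; Excel uses it to pick
# the right encoding when it opens the file.
readBin(path, what = "raw", n = 3)
#> [1] ef bb bf
```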
Answer 2:
test <- c("Argentinië", "België", "Haïti")
con <- file('~/test.csv', encoding = "ISO-8859-1")
write.csv(x = test, file = con)
Running csvlook test.csv -e "ISO-8859-1" in bash returns:
| a | x |
| - | ---------- |
| 1 | Argentinië |
| 2 | België |
| 3 | Haïti |
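The connection-based approach above can also be written with the fileEncoding argument of base R's write.csv(), which opens the encoded connection for you. A minimal sketch, with an illustrative tempfile() path rather than the ~/test.csv used in the answer:

```r
test <- c("Argentinië", "België", "Haïti")
path <- tempfile(fileext = ".csv")  # illustrative output path

# fileEncoding re-encodes the output to Latin-1 as it is written,
# equivalent to wrapping the file in an encoded connection.
write.csv(data.frame(x = test), path, fileEncoding = "ISO-8859-1")
```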
Answer 3:
Do the simple thing: open the file in Notepad and save it as UTF-8 under a new name. Then open that file in Excel; the import wizard will start. Choose a delimited import appropriate to your data, use "," as the column separator, and finish the import. You will get a clean file.
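The Notepad trick works because re-saving as UTF-8 prepends a byte order mark, which is what makes Excel detect the encoding. The same effect can be scripted in base R without leaving the console. A sketch, with an illustrative tempfile() path:

```r
test <- c("Argentinië", "België", "Haïti")
path <- tempfile(fileext = ".csv")  # illustrative output path

# Open a UTF-8 connection, write the BOM character first (encoded
# as the bytes EF BB BF), then append the CSV body to the same
# connection so Excel auto-detects UTF-8 on open.
con <- file(path, open = "w", encoding = "UTF-8")
writeLines("\ufeff", con, sep = "")
write.csv(data.frame(x = test), con, row.names = FALSE)
close(con)
```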
Source: https://stackoverflow.com/questions/53883593/export-csv-with-iso-8859-1-encoding-instead-of-utf-8