latin1

Special apostrophe breaks JSON

自古美人都是妖i submitted on 2019-12-24 06:57:27
Question: [7671] => Sleaford Carre’s is an element in $result.

    $result = json_encode($result);
    echo $result; // outputs "7671":null

Please note that this is not a normal apostrophe (single quote) or a backtick; I can't even find it on my keyboard. The data comes from a Latin-1 table. I have also noticed that using htmlentities when building the array makes the string disappear from the array. What am I to do??

Answer 1: As no one has actually written an answer, read the comments :) As the others have said, use utf8
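The character in question is U+2019, the right single quotation mark. MySQL's "latin1" is effectively Windows-1252, where that character lives at byte 0x92, and json_encode expects UTF-8 input, which is why the value comes out as null. A minimal Python sketch of the failure mode and fix, assuming the stored byte really is 0x92:

    import json

    # MySQL "latin1" is effectively cp1252; byte 0x92 is the curly
    # apostrophe U+2019 (assumed byte value for this sketch).
    raw = b"Sleaford Carre\x92s"

    text = raw.decode("cp1252")        # real text containing U+2019
    print(json.dumps({"7671": text}))  # {"7671": "Sleaford Carre\u2019s"}

The PHP equivalent is roughly mb_convert_encoding($s, 'UTF-8', 'Windows-1252') applied to each value before json_encode.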

Recoding a data.frame object from latin1 to utf-8

纵然是瞬间 submitted on 2019-12-24 00:47:13
Question: I work on Windows 7 (my system: LC_COLLATE=French_France.1252) with accented data. My data are coded in ANSI, which lets me view them correctly in the RStudio tabs. My problem: when I want to create a googleVis page (encoded in utf-8), the accented characters are not displayed correctly. What I expect: I am looking to convert my latin1 data.frames to utf-8 with R just before creating the googleVis pages. I have no idea how. The stringi package seems to work only with raw data. fr <-
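In R itself the usual tools for this are iconv(x, from = "latin1", to = "UTF-8") or enc2utf8() applied column by column. As a language-neutral sketch of the same recode, here in Python/pandas with placeholder file names, the conversion is simply a matter of reading with one encoding and writing with another:

    import pandas as pd

    # Read the ANSI/latin1 file, then re-emit it as utf-8; the in-memory
    # strings are encoding-agnostic, so only the I/O arguments matter.
    df = pd.read_csv("donnees_latin1.csv", encoding="latin1")
    df.to_csv("donnees_utf8.csv", encoding="utf-8", index=False)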

Converting a string from utf8 to latin1 in NodeJS

流过昼夜 submitted on 2019-12-23 15:23:46
Question: I'm using a Latin1-encoded DB and can't change it to UTF-8, which means I run into issues with certain application data. I'm using Tesseract to OCR a document (Tesseract outputs UTF-8) and tried to use iconv-lite; however, it creates a buffer, and I then need to convert that buffer into a string. But again, buffer-to-string conversion does not allow "latin1" encoding. I've read a bunch of questions/answers; however, all I get is setting the client encoding and things like that. Any ideas?

Answer 1: You can
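In Node, Buffer.from(text, 'latin1') performs the lossy encode directly (characters above U+00FF get mangled rather than replaced). The underlying operation, sketched here in Python for illustration, is an encode with an explicit policy for characters latin1 cannot represent:

    # OCR output is utf-8 text; the em dash has no latin1 code point,
    # so it must be replaced (or the encode raises an error).
    text = "naïve — OCR output"
    latin1_bytes = text.encode("latin-1", errors="replace")
    print(latin1_bytes)  # b'na\xefve ? OCR output'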

PostgreSQL ignores dashes when ordering

你离开我真会死。 submitted on 2019-12-22 09:09:59
Question: I have a PostgreSQL 8.4 database that was created with the da_DK.utf8 locale.

    dbname=> show lc_collate;
     lc_collate
    ------------
     da_DK.utf8
    (1 row)

When I select something from a table and order by a character varying column, I get behaviour that seems strange to me: when ordering the result, PostgreSQL ignores dashes that prefix the value. For example:

    select name from mytable order by name asc;

may return something like:

    name
    ----------------
    Ad...
    Ae...
    Ag...
    - Ak....
    At....

The dash prefix seems to be
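This is not a bug: linguistic collations (here Danish, via glibc) compare letters first and consider punctuation only as a tie-breaker, so a leading dash is effectively skipped. A small Python sketch of the difference, assuming the da_DK.UTF-8 locale is installed on the system:

    import locale

    names = ["Ad", "Ae", "Ag", "- Ak", "At"]
    print(sorted(names))  # plain code-point order puts '- Ak' first

    # Dictionary-style collation skips the punctuation, reproducing
    # the ordering from the question.
    locale.setlocale(locale.LC_COLLATE, "da_DK.UTF-8")
    print(sorted(names, key=locale.strxfrm))  # ['Ad', 'Ae', 'Ag', '- Ak', 'At']

On newer PostgreSQL versions the usual workaround is ORDER BY name COLLATE "C"; on 8.4, which predates per-expression collations, the collation is fixed when the cluster or database is created.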

Special characters get lost in MySQL export/import

孤者浪人 submitted on 2019-12-22 08:26:20
Question: I'm trying to move a MySQL 3.23.58 database to a different server running 5.5.19. The old one has latin1 encoding specified and, as far as I can tell, the underlying data is indeed honestly latin1. I have tried many things, chiefly:

- exporting from the terminal with mysqldump and the latin1 encoding flag;
- editing in vim to change "TYPE=InnoDB" to "ENGINE=InnoDB" for MySQL 5 compatibility;
- importing to the new server from the terminal.

Browsing the old server (in Sequel Pro for Mac, or MySQL Query
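One low-tech check worth running before anything else: verify what encoding the dump file actually ended up in, since any byte sequence decodes as latin1 but only genuine UTF-8 survives a utf-8 decode. A sketch, with 'dump.sql' as a placeholder path:

    raw = open("dump.sql", "rb").read()
    try:
        raw.decode("utf-8")
        print("decodes cleanly as utf-8 (pure ASCII also passes)")
    except UnicodeDecodeError:
        print("not valid utf-8 - consistent with honest latin1 data")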

When to use utf-8 and when to use latin1 in MySQL?

可紊 submitted on 2019-12-21 03:57:31
Question: I know that MySQL has a default of latin1 encoding, and apparently it takes 1 byte to store a character in latin1 and 3 bytes to store a character in utf-8 - is that correct? I am working on a site that I hope will be used globally. Do I absolutely need utf-8, or can I get away with using latin1? Also, I tried to change some tables from latin1 to utf8, but I got this error:

    Specified key was too long; max key length is 1000 bytes

Does anyone know the solution to this? And
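The key-length error follows directly from the arithmetic the question opens with: an index key is capped (1000 bytes in the error shown), and MySQL reserves the worst case of 3 bytes per character for its old utf8 charset, so the same VARCHAR index needs three times the bytes it did under latin1:

    MAX_KEY_BYTES = 1000  # limit quoted in the error (MyISAM; older InnoDB used 767)
    print(MAX_KEY_BYTES // 1)  # 1000 indexable chars in latin1 (1 byte/char)
    print(MAX_KEY_BYTES // 3)  # 333 indexable chars in old utf8 (3 bytes/char)

Hence the usual fixes: shorten the indexed column, index a prefix of it, or raise the limit on a newer server.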

Converting mysql tables from latin1 to utf8

心已入冬 submitted on 2019-12-17 22:19:39
Question: I'm trying to convert some MySQL tables from latin1 to utf8. I'm using the following command, which seems to mostly work:

    ALTER TABLE tablename CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

However, on one table I get an error about a duplicate key entry. This is caused by a unique index on a "name" field. It seems that when converting to utf8, any "special" characters are indexed as their straight English equivalents. For example, there is already a record with a name field value of "Dru"
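The collision happens because utf8_general_ci is accent-insensitive: it compares "Drü" and "Dru" as equal, and a unique index cannot hold two equal keys. A rough Python illustration of that folding (an approximation, not MySQL's actual collation algorithm):

    import unicodedata

    def fold(s):
        # Decompose, drop combining accents, lowercase - roughly what an
        # accent/case-insensitive collation does when comparing keys.
        nfkd = unicodedata.normalize("NFKD", s)
        return "".join(c for c in nfkd if not unicodedata.combining(c)).lower()

    print(fold("Dru") == fold("Drü"))  # True -> duplicate key on a unique index

Converting to an accent-sensitive collation such as utf8_bin instead is one common way around the duplicate.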

'latin-1' codec can't encode character u'\u2014' in position 23: ordinal not in range(256)

这一生的挚爱 submitted on 2019-12-13 14:20:44
Question: I am loading data into a pandas dataframe from an Excel workbook and am attempting to push it to a database when I get the above error. I thought at first the collation of the database was at issue, which I changed to utf8_bin. Next I checked the database engine create statement on my end, to which I added a parameter for the encoding too:

    engine = create_engine('mysql+pymysql://root@localhost/test', encoding="utf-8")

But neither of these things works; I am still getting the error from the line: df
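create_engine's encoding argument does not control what the MySQL driver sends over the wire; with PyMySQL the connection character set is normally set through a charset parameter in the URL. A sketch under that assumption (the server and target table must also accept utf8mb4):

    from sqlalchemy import create_engine

    # charset is forwarded to PyMySQL, so the connection itself speaks
    # utf8mb4 instead of falling back to a latin-1 default.
    engine = create_engine("mysql+pymysql://root@localhost/test?charset=utf8mb4")
    # df.to_sql("some_table", engine, if_exists="append", index=False)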

replace garbage characters within mysql

╄→尐↘猪︶ㄣ submitted on 2019-12-13 01:26:20
Question: My db is in latin1 and is full of â" or '��"' (depending on whether my terminal is set to latin1 or unicode, respectively). From context, I think they should be em dashes. They appear to cause nasty bugs when rendered (or not rendered) in IE. I'd like to find and replace them. The problem is that neither the â nor the � character matches with replace. Running the query:

    update TABLE set COLUMN = replace(COLUMN,'��"','---');

executes without error but doesn't do anything (0 rows changed). It's
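The pattern described is classic mojibake: an em dash stored as its three utf-8 bytes (0xE2 0x80 0x94) and then read back as latin1/cp1252. A Python sketch that reproduces the garbage and reverses it, which also pins down the exact three-character string a MySQL replace() would have to match:

    # utf-8 bytes of an em dash, mis-decoded as latin1 -> three junk chars.
    mojibake = "\u2014".encode("utf-8").decode("latin1")
    print(repr(mojibake))  # 'â\x80\x94' (shows as â€” on a cp1252 terminal)

    # Re-encoding and decoding correctly recovers the em dash.
    print(mojibake.encode("latin1").decode("utf-8"))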

How to retrieve utf-8 data with PHP and show the correct encoding in an Excel sheet db dump?

那年仲夏 submitted on 2019-12-12 11:34:57
Question: Hi, I am saving mostly English and German characters into a MySQL database which is currently set to the utf-8 charset. I am assuming that I should use the latin1 charset for this type of data; is that correct? If so, how can I change the charset to correct the German chars which are now saved in utf-8?

UPDATE: Maybe then it is a retrieval problem... When I export data from the db via php I of course get utf-8 back; can I make the retrieval give me latin1?

UPDATE 1: OK, I am building a website; the html
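If the goal is correct display in Excel rather than recoding the database, one common trick is to keep everything utf-8 and make the export file self-describing: Excel tends to assume a legacy charset for CSV unless the file starts with a UTF-8 BOM. A Python sketch of that trick (the file name and row data are placeholders):

    import csv

    # 'utf-8-sig' prepends a BOM, which Excel uses to detect utf-8.
    with open("dump.csv", "w", newline="", encoding="utf-8-sig") as f:
        writer = csv.writer(f)
        writer.writerow(["name"])
        writer.writerow(["Müller"])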