utf-8

Sniffing and displaying TCP packets in UTF-8

柔情痞子 · submitted on 2020-01-01 03:17:07

Question: I am trying to use tcpdump to display the content of TCP packets flowing on my network. I have something like:

tcpdump -i wlan0 -l -A

The -A option displays the content as ASCII text, but my text seems to be UTF-8. Is there a way to display UTF-8 properly using tcpdump? Do you know of any other tools that could help? Many thanks.

Answer 1: Make sure your terminal supports UTF-8 output, and pipe the output to something that replaces non-printable characters:

tcpdump -lnpi lo tcp port 80 -s 16000 …
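One concrete way to follow that advice is to read tcpdump's raw stdout yourself and decode it incrementally as UTF-8, replacing undecodable bytes. An incremental decoder matters because a multi-byte character can be split across two reads. A Python sketch (the helper name, chunk size, and piping arrangement are my own, not from the answer):

```python
import codecs

def utf8_lines(byte_chunks):
    """Decode an iterable of byte chunks as UTF-8, yielding complete lines.

    Uses an incremental decoder so a multi-byte character split across
    two chunks is reassembled; invalid bytes become U+FFFD.
    """
    decoder = codecs.getincrementaldecoder("utf-8")(errors="replace")
    buf = ""
    for chunk in byte_chunks:
        buf += decoder.decode(chunk)
        while "\n" in buf:
            line, buf = buf.split("\n", 1)
            yield line
    tail = buf + decoder.decode(b"", final=True)
    if tail:
        yield tail

# Hypothetical usage, feeding it tcpdump's stdout:
#   proc = subprocess.Popen(["tcpdump", "-i", "wlan0", "-l", "-A"],
#                           stdout=subprocess.PIPE)
#   for line in utf8_lines(iter(lambda: proc.stdout.read(4096), b"")):
#       print(line)
```

The key design point is decoding bytes-to-text exactly once, at the boundary, rather than letting tcpdump's ASCII rendering mangle the multi-byte sequences first.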

iconv unicode unknown input format

家住魔仙堡 · submitted on 2020-01-01 03:01:08

Question: I have a file which is described under Unix as:

$ file xxx.csv
xxx.csv: UTF-8 Unicode text, with very long lines

Viewing it in less/vi renders some special characters (ß, İ, ...) unreadably (├╝); Windows will not display it either, and importing it directly into a database just changes the special characters into other special characters (+ä, +ñ, ...). I wanted to convert it to a "default readable" encoding with iconv. When I try to convert it with iconv:

$ iconv -f UTF-8 -t ISO-8859-1 xxx.csv > …
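A likely cause of trouble with that exact command: characters such as İ (U+0130) have no ISO-8859-1 code point at all, so a strict conversion stops with an error; GNU iconv's //TRANSLIT or //IGNORE suffixes work around this. The same lossy conversion sketched in Python (the sample text is my own):

```python
text = "Größe in İstanbul"   # 'ö' and 'ß' exist in Latin-1; 'İ' does not

# Strict conversion fails exactly where iconv would:
try:
    text.encode("iso-8859-1")
except UnicodeEncodeError as exc:
    print("cannot convert:", exc.object[exc.start])   # İ

# Lossy conversion, in the spirit of iconv's -t ISO-8859-1//TRANSLIT;
# Python's "replace" handler substitutes '?' for unmappable characters:
latin1 = text.encode("iso-8859-1", errors="replace")
```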

What is the difference between Content-Type…charset=X and Content-Encoding=X?

南笙酒味 · submitted on 2020-01-01 01:33:21

Question: Is there any effective difference between

Content-Encoding: UTF-8

and

Content-Type: text/html; charset=utf-8

?

Answer 1: The optional charset parameter makes sense only for text-based content (Content-Types such as text/plain, text/html, and so on); not all messages are text. Content-Encoding means that the whole body has been encoded in some way, usually compressed; typical values for this header are gzip and deflate. The recipient of such a message should decode (e.g. gunzip) the body to get the original …
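The two headers operate at different layers, and a recipient undoes them in a fixed order: Content-Encoding first (decompress the body), then the charset from Content-Type (decode bytes to text). A minimal Python sketch of both directions:

```python
import gzip

page = "<p>café menu</p>"         # text/html containing non-ASCII characters

# Sender side:
body = page.encode("utf-8")       # Content-Type: text/html; charset=utf-8
wire = gzip.compress(body)        # Content-Encoding: gzip

# Recipient side, reversing the layers in the opposite order:
text = gzip.decompress(wire).decode("utf-8")
assert text == page
```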

How to read a file that can be saved as either ANSI or Unicode in Python?

梦想的初衷 · submitted on 2020-01-01 01:20:06

Question: I have to write a script that supports reading a file which can be saved as either Unicode or ANSI (using MS Notepad). I have no indication of the encoding format in the file; how can I support both encoding formats (i.e. a generic way of reading files without knowing the format in advance)?

Answer 1: MS Notepad gives the user a choice of four encodings, expressed in clumsy, confusing terminology: "Unicode" is UTF-16 written little-endian; "Unicode big endian" is UTF-16 written big …
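Notepad writes a byte-order mark (BOM) for each of its Unicode choices, which yields a practical detection recipe: check for a BOM first, then try UTF-8, and only then fall back to the "ANSI" code page. The cp1252 fallback below is an assumption, since "ANSI" depends on the system locale (it is cp1252 on Western-European Windows). A sketch:

```python
def sniff_notepad_encoding(raw: bytes) -> str:
    """Guess which of Notepad's save-as encodings produced these bytes."""
    if raw.startswith(b"\xef\xbb\xbf"):
        return "utf-8-sig"    # Notepad's "UTF-8" (written with a BOM)
    if raw.startswith(b"\xff\xfe"):
        return "utf-16-le"    # Notepad's "Unicode"
    if raw.startswith(b"\xfe\xff"):
        return "utf-16-be"    # Notepad's "Unicode big endian"
    try:
        raw.decode("utf-8")
        return "utf-8"        # BOM-less UTF-8 (from other editors)
    except UnicodeDecodeError:
        return "cp1252"       # "ANSI", assuming Western-European Windows

# Usage: read the raw bytes once, then decode with the guess:
#   raw = open(path, "rb").read()
#   text = raw.decode(sniff_notepad_encoding(raw))
```

The "utf-8-sig" codec strips the BOM on decode, so the caller never sees a stray U+FEFF at the start of the text.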

Loading UTF-8 encoded dump into MySQL

℡╲_俬逩灬. · submitted on 2019-12-31 22:25:22

Question: I had been pulling my hair out over this problem for a few hours yesterday. I have a database on a MySQL 4.1.22 server with its encoding set to "UTF-8 Unicode (utf8)" (as reported by phpMyAdmin). Tables in this database have their default charset set to latin2, yet the web application using it (CMS Made Simple, written in PHP) displays pages in utf8. However screwed up this may be, it actually works: the web app displays characters correctly (mostly Czech and Polish are used). I run:

mysqldump -u xxx …
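When such a dump is reloaded under the wrong assumptions, the usual result is double encoding: UTF-8 bytes that were read as if they were latin2 and then encoded to UTF-8 again. The preventive step is making mysqldump state the character set the application actually uses (its --default-character-set option); after the fact, the damage is mechanically reversible as long as nothing was lost. A hypothetical Python repair sketch (the sample word is mine, not from the question):

```python
# Czech 'řeknu' stored as UTF-8 ('ř' is bytes C5 99), but misread as
# latin2, where C5 is 'Ĺ' and 99 is an invisible control character:
broken = "Ĺ\x99eknu"

# Re-encode with the wrong charset, then decode with the right one:
fixed = broken.encode("iso8859_2").decode("utf-8")
assert fixed == "řeknu"
```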

PHP: Problems converting “’” character from ISO-8859-1 to UTF-8

ぃ、小莉子 · submitted on 2019-12-31 17:12:27

Question: I'm having some issues using PHP to convert ISO-8859-1 database content to UTF-8. I am running the following code to test:

// Connect to a latin1 charset database
// and retrieve "Georgia O’Keeffe", which contains a "’" character
$connection = mysql_connect('*****', '*****', '*****');
mysql_select_db('*****', $connection);
mysql_set_charset('latin1', $connection);
$result = mysql_query('SELECT notes FROM categories WHERE id = 16', $connection);
$latin1Str = mysql_result($result, 0); …
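What typically goes wrong in this situation: the column is declared latin1 but actually holds the UTF-8 bytes of "’" (U+2019, bytes E2 80 99), so each byte gets reinterpreted as a separate character. MySQL's latin1 is essentially Windows cp1252, which Python can stand in for to show both the breakage and the repair (an illustration of the common failure mode, not the poster's confirmed data path):

```python
s = "Georgia O\u2019Keeffe"

# UTF-8 bytes read back through a cp1252/"latin1" connection:
mojibake = s.encode("utf-8").decode("cp1252")
print(mojibake)                    # Georgia Oâ€™Keeffe

# The round trip is reversible, which is how such data gets repaired:
assert mojibake.encode("cp1252").decode("utf-8") == s
```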

The £ sign is shown as a diamond with a question mark in the middle

蹲街弑〆低调 · submitted on 2019-12-31 07:43:48

Question: I'm creating a web app which involves displaying financial data to the user. Being from the UK and using GBP (£) for currency, this character comes up a lot. However, every now and then the £ is shown as a diamond with a question mark in the middle, and the web page throws an "invalid character: UTF-8 byte 1 of 1-byte string" error. Is there a UTF-safe way to display the £ sign? Here is an example of what I am doing at the moment:

"Rent Per Annum: £" + '${tenant.currentRent}'

Answer 1: The particular …
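The "diamond with a question mark" is U+FFFD, the Unicode replacement character, which a decoder emits when the bytes it sees are not valid UTF-8. £ is the single byte 0xA3 in ISO-8859-1 but the two bytes 0xC2 0xA3 in UTF-8, so serving Latin-1 bytes on a page declared as UTF-8 produces exactly this symptom. A Python demonstration:

```python
raw = "£".encode("iso-8859-1")                 # b'\xa3', one byte
assert "£".encode("utf-8") == b"\xc2\xa3"      # UTF-8 needs two bytes

# 0xA3 alone is invalid UTF-8; lenient decoders substitute U+FFFD:
shown = raw.decode("utf-8", errors="replace")
assert shown == "\ufffd"                       # the diamond question mark

# Fixes: declare the encoding the bytes really are, encode the page as
# UTF-8 end to end, or sidestep the issue with an HTML entity (&pound;).
assert raw.decode("iso-8859-1") == "£"
```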

PDO query returns lots of \uXXXX character codes which I can't convert to unicode characters

南楼画角 · submitted on 2019-12-31 07:42:31

Question: I have a MySQL database table in which I store the names of countries in different languages, and I can't get the data to display as Unicode characters: I only get \uXXXX codes where the special characters should be. The query is used in an AJAX request, with the results encoded as a JSON object. Here is the table (truncated):

CREATE TABLE IF NOT EXISTS `tbl_countries` (
  `ccode` varchar(2) character set utf8 collate utf8_unicode_ci NOT NULL default '',
  `country_en` varchar(100) …
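Those \uXXXX sequences are not corruption: they are legal JSON escapes, which PHP's json_encode emits for every non-ASCII character by default (its JSON_UNESCAPED_UNICODE flag, available since PHP 5.4, turns that off), and any real JSON parser converts them back into characters. The same behavior illustrated in Python (the sample row values and the country_de column are my own, not from the table above):

```python
import json

row = {"ccode": "at", "country_en": "Austria", "country_de": "Österreich"}

print(json.dumps(row))                       # ..."\u00d6sterreich" (escaped)
print(json.dumps(row, ensure_ascii=False))   # ..."Österreich"      (raw UTF-8)

# Both spellings decode to the identical string:
assert json.loads('"\\u00d6sterreich"') == "Österreich"
```

In other words, the fix is usually on the client: parse the response as JSON instead of treating the escaped text as the final display string.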

VBA Macro to search a text string and replace with hyperlink works only with English text, but not Arabic

五迷三道 · submitted on 2019-12-31 06:44:14

Question: I have Arabic text in Microsoft Word, and I need to add hyperlinks to some of its words. The reply to this question here works for me, but only for English words: when I replace the word "google" with any Arabic string (whether one word or multiple words), the macro does not work. I can display the Arabic characters correctly in VBA, using the answer to this question here, so there is no problem displaying the text in the macro. Can you please help me understand what code …
