csv-import

PostgreSQL CSV import that skips rows

孤者浪人 submitted on 2019-12-11 04:39:02
Question: I have a PostgreSQL script that automatically imports CSV files into my database. The script can detect duplicate records and remove them, and do a proper upsert, but it still cannot handle everything. The CSV files are exported from other systems, which append extra information at the beginning and end of the file, e.g.:

Total Count: 2956
Avg Time: 13ms
Column1, Column2, Column3
...
...
...

What I want to do is skip those initial rows, or any rows at the bottom of the file. Is there any way I
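PostgreSQL's COPY can skip at most one leading header line (the HEADER option of the CSV format), so preamble and footer lines generally need preprocessing, either on the server (e.g. COPY ... FROM PROGRAM with a tail/sed pipeline, available since 9.3) or in a small script before loading. A minimal sketch of the script route, assuming the metadata lines and header name come from the question's sample (the "Column1" marker is an assumption, not a fixed rule):

```python
import csv

def clean_csv_lines(text, expected_header="Column1"):
    """Drop leading metadata lines (e.g. 'Total Count: ...', 'Avg Time: ...')
    until the real header row appears, then drop any trailing rows whose
    field count doesn't match the header."""
    lines = text.splitlines()
    # Skip everything before the line that starts the real CSV header
    start = next(i for i, l in enumerate(lines) if l.startswith(expected_header))
    rows = list(csv.reader(lines[start:]))
    ncols = len(rows[0])
    # Footer lines ("Total ...", free text) won't have the header's field count
    return [r for r in rows if len(r) == ncols]
```

The cleaned rows can then be rewritten to a temporary file (or streamed via psycopg's copy support) and loaded with a plain COPY.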

SQL Server 2012 Import CSV to mapped Columns

蹲街弑〆低调 submitted on 2019-12-10 16:38:39
Question: I'm currently trying to import about 10,000 rows (from a CSV file) into an existing table. I only have one column to import, but my table has another column, TypeId, which I need to set to a static value, i.e. 99E05902-1F68-4B1A-BC66-A143BFF19E37. So I need something like:

INSERT INTO TABLE ([Name], [TypeId])
VALUES (@Name (CSV value), '99E05902-1F68-4B1A-BC66-A143BFF19E37')

Any examples would be great. Thanks

Answer 1: As mentioned above import the data into a temporary
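The staging-table pattern the answer starts to describe is: bulk-load the CSV into a temporary table, then INSERT ... SELECT into the real table, supplying the constant for TypeId. In SQL Server you would use BULK INSERT or the import wizard for the first step; the sketch below illustrates the same shape with Python's built-in sqlite3 so it is runnable as-is (table and column names mirror the question, the CSV text is a stand-in for the real file):

```python
import csv
import io
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Staging (Name TEXT)")
con.execute("CREATE TABLE Target (Name TEXT, TypeId TEXT)")

# Step 1: load the single-column CSV into the staging table
csv_text = "Name\nAlice\nBob\n"   # stand-in for the real 10,000-row file
reader = csv.DictReader(io.StringIO(csv_text))
con.executemany("INSERT INTO Staging (Name) VALUES (?)",
                [(row["Name"],) for row in reader])

# Step 2: copy across, filling TypeId with the static value
con.execute("""INSERT INTO Target (Name, TypeId)
               SELECT Name, '99E05902-1F68-4B1A-BC66-A143BFF19E37'
               FROM Staging""")
```

The INSERT ... SELECT with a literal in the select list is standard SQL and works unchanged in T-SQL.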

Preventing R from interpreting text as numeric

生来就可爱ヽ(ⅴ<●) submitted on 2019-12-02 10:14:38
I am trying to import a CSV in R which has ZIP code information that R interprets as numeric, when I need it to remain character:

data = read.csv("zipCodeInformation.csv", stringsAsFactors = FALSE)

The data has the following format:

Lower.Zip, Upper.Zip, Zone
004, 005, Zone.8
006, 007, Zone.45
009, , Zone.45
010, 089, Zone.8
100, 339, Zone.8

What happens right now is that R interprets the first two columns as numeric, which drops the leading zeros:

Lower.Zip, Upper.Zip, Zone
4, 5, Zone.8
6, 7, Zone.45
9, , Zone.45
10, 89, Zone.8
100, 339, Zone.8

Answer 1: Use the colClasses argument to read.csv.
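In R the one-line fix is read.csv("zipCodeInformation.csv", colClasses = "character"), since a single class is recycled across all columns (a vector of per-column classes also works). The same trap exists in other tools; as a point of comparison, Python's stdlib csv module never coerces types, so leading zeros survive by default (sample data copied from the question):

```python
import csv
import io

raw = """Lower.Zip,Upper.Zip,Zone
004,005,Zone.8
006,007,Zone.45
009,,Zone.45
"""

# csv.reader returns every field as a string; no numeric coercion
# happens, so the ZIP codes keep their leading zeros.
rows = list(csv.reader(io.StringIO(raw)))
```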

pandas read_csv and filter columns with usecols

耗尽温柔 submitted on 2019-11-27 17:11:59
I have a csv file which isn't coming in correctly with pandas.read_csv when I filter the columns with usecols and use multiple indexes.

import pandas as pd

csv = r"""dummy,date,loc,x
bar,20090101,a,1
bar,20090102,a,3
bar,20090103,a,5
bar,20090101,b,1
bar,20090102,b,3
bar,20090103,b,5"""

f = open('foo.csv', 'w')
f.write(csv)
f.close()

df1 = pd.read_csv('foo.csv', header=0,
                  names=["dummy", "date", "loc", "x"],
                  index_col=["date", "loc"],
                  usecols=["dummy", "date", "loc", "x"],
                  parse_dates=["date"])
print df1

# Ignore the dummy columns
df2 = pd.read_csv('foo.csv', index_col=["date", "loc"], usecols
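The excerpt cuts off before the failing call, so the pandas-specific interaction between usecols, names, and index_col can't be reconstructed here. For the column-filtering half of the problem on its own, the stdlib csv module can do the same selection without pandas; this sketch mirrors the question's sample file (column names taken from it):

```python
import csv
import io

raw = """dummy,date,loc,x
bar,20090101,a,1
bar,20090102,b,3
"""

wanted = ["date", "loc", "x"]          # usecols-style subset
reader = csv.DictReader(io.StringIO(raw))
rows = [{k: row[k] for k in wanted} for row in reader]
```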

Commas within CSV Data

[亡魂溺海] submitted on 2019-11-27 14:01:47
Question: I have a CSV file which I am importing directly into a SQL Server table. In the CSV file each column is separated by a comma. But my problem is that I have a column "address", and the data in this column contains commas. So what is happening is that some of the data of the address column is spilling into the other columns while importing to SQL Server. What should I do?

Answer 1: If a column contains a comma, that column should be surrounded by quotes (double quotes, per RFC 4180). If the quoted column itself contains a quote character, that character must in turn be escaped: RFC 4180 doubles it (""), while some tools instead expect a backslash escape character before it, e.g. \". Example
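As an illustration of the quoting rule with Python's stdlib csv module (most CSV libraries behave the same way): the writer quotes only the field that needs it, and reading the line back recovers the original fields losslessly. The sample values are made up.

```python
import csv
import io

buf = io.StringIO()
# Default dialect: comma delimiter, minimal quoting with double quotes
csv.writer(buf).writerow(["John Doe", "12 Main St, Springfield", "555-0100"])
line = buf.getvalue().strip()
# Only the field containing a comma gets quoted:
# John Doe,"12 Main St, Springfield",555-0100

# Reading it back recovers the original three fields intact
fields = next(csv.reader(io.StringIO(line)))
```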

LOAD DATA LOCAL INFILE gives the error The used command is not allowed with this MySQL version

时光毁灭记忆、已成空白 submitted on 2019-11-26 09:40:03
Question: I have a PHP script that calls MySQL's LOAD DATA INFILE to load data from CSV files. However, on the production server I ended up with the following error:

Access denied for user ... (using password: yes)

As a quick workaround, I changed the command to LOAD DATA LOCAL INFILE, which worked. However, the same command failed on the client's server with this message:

The used command is not allowed with this MySQL version

I assume this has something to do with the server variable local_infile = off as
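LOAD DATA LOCAL INFILE requires the LOCAL capability to be enabled on both ends: the server's local_infile system variable must be ON, and the client must opt in as well. A sketch of the option-file settings (option names per the MySQL manual; where your my.cnf lives depends on the installation):

```ini
# my.cnf / my.ini -- server side
[mysqld]
local_infile = 1      # equivalently, at runtime: SET GLOBAL local_infile = 1;

# client side (mysql command-line client)
[mysql]
local-infile = 1
```

From PHP the connector has to opt in too, e.g. PDO's PDO::MYSQL_ATTR_LOCAL_INFILE => true connection option or mysqli_options() with MYSQLI_OPT_LOCAL_INFILE. If you control neither the server nor the client flags (a shared host, say), the fallback is to parse the CSV in PHP and issue batched multi-row INSERTs instead.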

Create Pandas DataFrame from a string

依然范特西╮ submitted on 2019-11-25 23:48:43
Question: In order to test some functionality I would like to create a DataFrame from a string. Let's say my test data looks like:

TESTDATA = """col1;col2;col3
1;4.4;99
2;4.5;200
3;4.7;65
4;3.2;140
"""

What is the simplest way to read that data into a Pandas DataFrame?

Answer 1: A simple way to do this is to use StringIO.StringIO (Python 2) or io.StringIO (Python 3) and pass that to the pandas.read_csv function. E.g.:

import sys
if sys.version_info[0] < 3:
    from StringIO import StringIO
else:
    from io
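On Python 3 the completed idiom the answer describes is pd.read_csv(io.StringIO(TESTDATA), sep=";"). Because read_csv accepts any file-like object, the same StringIO wrapping works with the stdlib csv module too; the sketch below uses that so it runs without pandas installed (data copied from the question):

```python
import csv
import io

TESTDATA = """col1;col2;col3
1;4.4;99
2;4.5;200
3;4.7;65
4;3.2;140
"""

# io.StringIO turns the string into the file-like object that
# pandas.read_csv (and csv.reader) expect.
rows = list(csv.reader(io.StringIO(TESTDATA), delimiter=";"))
header, body = rows[0], rows[1:]
```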