csv

pandas read_csv remove blank rows

Submitted by 孤者浪人 on 2020-07-09 07:36:11
Question: I am reading a CSV file into a DataFrame while defining each column's data type. The code below raises an error if the CSV file contains a blank row. How do I read the CSV without the blank rows?

dtype = {'material_id': object, 'location_id': object, 'time_period_id': int, 'demand': int, 'sales_branch': object, 'demand_type': object}
df = pd.read_csv('./demand.csv', dtype=dtype)

I thought of a workaround along these lines, but I am not sure it is the most efficient way: df=pd.read_csv(
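
A minimal sketch of one possible workaround, assuming the blank rows either are truly empty lines or parse as all-NaN rows: let read_csv skip blank lines (its default), drop any all-NaN rows that remain, and only cast the integer columns afterwards, since NaN cannot be stored in an int column.

import pandas as pd

# Sketch, not necessarily the asker's intended fix: read without forcing the
# int columns, drop rows that are entirely empty, then cast.  'demand.csv'
# and the column names are taken from the question above.
dtype = {'material_id': object, 'location_id': object,
         'sales_branch': object, 'demand_type': object}
df = pd.read_csv('./demand.csv', dtype=dtype, skip_blank_lines=True)
df = df.dropna(how='all')                              # remove all-NaN rows
df = df.astype({'time_period_id': int, 'demand': int})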

Google Maps infowindow content disappears on reload. What could be causing this?

Submitted by 拜拜、爱过 on 2020-07-09 05:48:06
Question: I have JavaScript that reads data from a CSV file to display as infowindow content on Google Maps:

// Create a map of maximum predicted value and time of that prediction
function maxValuesMap(data, maxmap) {
    var dataStr = new String(data),
        stations = $.csv.toObjects(dataStr),
        iWindows = [];
    for (var i = 0; i < stations.length; i++) {
        var st = stations[i]['Station'],
            ts = stations[i]['Time'],
            max = stations[i]['Max'],
            lat = stations[i]['Lat'],
            lon = stations[i]['Lon'];
        var windowLatLng = new google

Using awk or perl to extract specific columns from CSV (parsing)

Submitted by 半城伤御伤魂 on 2020-07-08 06:04:09
Question: Background - I want to extract specific columns from a CSV file. The CSV file is comma delimited, uses double quotes as the text qualifier (optional, but when a field contains special characters the qualifier will be there - see example), and uses backslashes as the escape character. It is also possible for some fields to be blank. Example input and desired output - for example, I only want columns 1, 3, and 4 in the output file. The final extract of the columns from the csv file
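
The question asks for awk or perl; purely as a hedged point of comparison, Python's csv module can also honor the dialect described (comma delimiter, optional double-quote qualifier, backslash escape) while extracting columns 1, 3, and 4. The file names below are placeholders.

import csv

# Sketch of an alternative, not an awk/perl answer: parse with the dialect
# described in the question and write only columns 1, 3 and 4 (1-based).
with open('input.csv', newline='') as src, \
     open('output.csv', 'w', newline='') as dst:
    reader = csv.reader(src, delimiter=',', quotechar='"', escapechar='\\')
    writer = csv.writer(dst, quoting=csv.QUOTE_MINIMAL)
    for row in reader:
        writer.writerow([row[0], row[2], row[3]])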

Convert xls File to csv, but extra rows added?

Submitted by 时间秒杀一切 on 2020-07-07 14:17:36
Question: I am trying to convert some xls files to CSV, and everything works great except for one part: the SaveAs function in the Excel interop seems to export all of the rows, including blank ones. I can see these rows when I look at the file in Notepad (all of the rows I expect, 15 rows with two single quotes, then the rest are just blank). I then have a stored procedure that takes this CSV and imports it into the desired table (this works on spreadsheets that have been manually converted to
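
The conversion code in question is C# Excel interop and is not shown here; as an illustrative, hedged workaround in Python only, the exported CSV could be post-processed to drop the trailing "blank" rows before the stored procedure imports it. The file names and the test for a blank row (nothing but commas and whitespace) are assumptions.

# Hedged post-processing sketch: keep only lines that still contain something
# after the commas and whitespace are removed.
with open('export.csv') as src, open('export_clean.csv', 'w') as dst:
    for line in src:
        if line.strip().replace(',', ''):   # row still has real data
            dst.write(line)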

Replace cell values in each row of pandas column using for loop

Submitted by 风流意气都作罢 on 2020-07-06 12:33:29
Question: Please help me understand my error. I'm trying to change one column in my .csv file. The .csv file looks like this:

sku,name,code
k1,aaa,886
k2,bbb,898
k3,ccc,342
k4,ddd,503
k5,eee,401

I want to replace the "k" symbol with the "_" symbol in the "sku" column. I wrote this code:

import sys
import pandas as pd
import numpy as np
import datetime

df = pd.read_csv('cat0.csv')
for r in df['sku']:
    r1 = r.replace('k', '_')
    df['sku'] = r1
print(df)

But the code inserts the last value in every row of the
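
For reference, a short sketch of the vectorised form this is usually written in: the loop above assigns the scalar r1 to the whole 'sku' column on every pass, so only the last row's value survives. Series.str.replace applies the substitution to every row at once (same cat0.csv as above).

import pandas as pd

# Vectorised fix: replace 'k' with '_' in every value of the 'sku' column.
df = pd.read_csv('cat0.csv')
df['sku'] = df['sku'].str.replace('k', '_')
print(df)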

How to load the csv file into the Spark DataFrame with Array[Int]

Submitted by 点点圈 on 2020-07-05 10:27:04
Question: Every row in my CSV file is structured like this:

u001, 2013-11, 0, 1, 2, ... , 99

in which u001 and 2013-11 are the UID and date, and the numbers from 0 to 99 are the data values. I want to load this CSV file into a Spark DataFrame with this structure:

+-------+-------------+-----------------+
|    uid|         date|       dataVector|
+-------+-------------+-----------------+
|   u001|      2013-11|  [0,1,...,98,99]|
|   u002|      2013-11| [1,2,...,99,100]|
+-------+-------------+-----------------+

root
 |-- uid: string (nullable =
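
A hedged PySpark sketch of one way to build that shape (the question may well be about the Scala API; the file name and the assumption that the CSV has no header line are mine): read the file with inferred types, rename the first two columns, and fold the remaining value columns into a single array column.

from pyspark.sql import SparkSession
from pyspark.sql.functions import array, col

spark = SparkSession.builder.getOrCreate()

# _c0, _c1, ... are Spark's default names for a header-less CSV.
df = spark.read.csv('data.csv', header=False, inferSchema=True)
value_cols = df.columns[2:]                     # the columns holding 0..99
df = (df
      .withColumnRenamed('_c0', 'uid')
      .withColumnRenamed('_c1', 'date')
      .withColumn('dataVector', array(*[col(c).cast('int') for c in value_cols]))
      .select('uid', 'date', 'dataVector'))
df.printSchema()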
