csv

d3 accessing nested data in grouped bar chart

喜夏-厌秋 submitted on 2021-02-06 20:51:11

Question: I'm building a grouped bar chart by nesting a .csv file. The chart will also be viewable as a line chart, so I want a nesting structure that suits the line object. My original .csv looks like this:

Month,Actual,Forecast,Budget
Jul-14,200000,-,74073.86651
Aug-14,198426.57,-,155530.2499
Sep-14,290681.62,-,220881.4631
Oct-14,362974.9,-,314506.6437
Nov-14,397662.09,-,382407.67
Dec-14,512434.27,-,442192.1932
Jan-15,511470.25,511470.25,495847.6137
Feb-15,-,536472.5467,520849.9105
Mar-15,-,612579
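A sketch of the restructuring this question is after, in Python rather than d3 (the column names and sample values come from the question; the `nest_by_series` helper and the exact output shape are my assumptions, mirroring the `{key, values}` arrays that `d3.nest()` produces and that the line generator consumes):

```python
import csv
import io

# Truncated sample of the question's CSV; '-' marks a missing value.
RAW = """Month,Actual,Forecast,Budget
Jul-14,200000,-,74073.86651
Aug-14,198426.57,-,155530.2499
Jan-15,511470.25,511470.25,495847.6137
"""

def nest_by_series(text):
    """Pivot wide rows (Month,Actual,Forecast,Budget) into one entry per
    series, each holding its own list of {month, value} points — the shape
    a d3 line generator and a grouped-bar layout can both iterate over.
    '-' cells are treated as missing and skipped entirely."""
    reader = csv.DictReader(io.StringIO(text))
    series = {}
    for row in reader:
        for name, cell in row.items():
            if name == "Month" or cell == "-":
                continue
            series.setdefault(name, []).append(
                {"month": row["Month"], "value": float(cell)}
            )
    return [{"key": k, "values": v} for k, v in series.items()]

nested = nest_by_series(RAW)
print(nested[0]["key"], len(nested[0]["values"]))  # → Actual 3
```

Skipping the `-` cells (rather than coercing them to 0) matters for the line-chart view: each series simply starts and stops where it has data.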

Scala: Iterate over CSV files in a functional way?

▼魔方 西西 submitted on 2021-02-06 05:32:16

Question: I have CSV files with comments that give column names, where the columns change throughout the file:

#c1,c2,c3
a,b,c
d,e,f
#c4,c5
g,h
i,j

I want to provide a way to iterate over (only) the data rows of the file as Maps of column names to values (all Strings). So the above would become:

Map(c1 -> a, c2 -> b, c3 -> c)
Map(c1 -> d, c2 -> e, c3 -> f)
Map(c4 -> g, c5 -> h)
Map(c4 -> i, c5 -> j)

The files are very large, so reading everything into memory is not an option. Right now I have an
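The lazy strategy the question needs can be sketched as a generator (shown in Python for brevity; the question targets Scala, where an `Iterator` over the file's lines plays the same role): carry the most recent `#` header along while streaming, so nothing but the current row is ever in memory.

```python
def rows_as_maps(lines):
    """Lazily yield each data row as a dict keyed by the most recent
    '#'-prefixed header line. Because this is a generator over an
    iterable of lines, arbitrarily large files stream row by row."""
    headers = None
    for line in lines:
        line = line.rstrip("\n")
        if not line:
            continue
        if line.startswith("#"):
            headers = line[1:].split(",")   # new column set takes effect here
        else:
            yield dict(zip(headers, line.split(",")))

# The question's sample input, as a list standing in for a file handle.
sample = ["#c1,c2,c3", "a,b,c", "d,e,f", "#c4,c5", "g,h", "i,j"]
for m in rows_as_maps(sample):
    print(m)
```

Passing an open file object instead of `sample` works unchanged, since files iterate line by line.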

SSIS: Code page goes back to 65001

房东的猫 submitted on 2021-02-05 20:28:37

Question: In an SSIS package that I'm writing, I have a CSV file as a source. On the Connection Manager General page, it has 65001 as the Code page (I was testing something). Unicode is not checked. The columns map to a SQL Server destination table with varchar (among others) columns. There's an error at the destination: "The column "columnname" cannot be processed because more than one code page (65001 and 1252) are specified for it." My SQL columns have to be varchar, not nvarchar, due to other
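The error says the flat-file source is declared as code page 65001 (UTF-8) while the varchar destination expects 1252 (Windows-1252). One hedged workaround, sketched here in Python as a pre-processing step outside SSIS (the function name is mine, and whether this fits the pipeline depends on the actual source data): re-encode the file's content to 1252 up front, so the connection manager's code page can be set to 1252 and match the destination.

```python
def to_cp1252(data: bytes) -> bytes:
    """Decode UTF-8 (code page 65001) bytes and re-encode them as
    Windows-1252 (code page 1252). Characters with no 1252 mapping
    become '?' via errors='replace', so the load cannot fail mid-file
    on a stray symbol — acceptable only if such characters are rare."""
    return data.decode("utf-8").encode("cp1252", errors="replace")

print(to_cp1252("Month,Café\n".encode("utf-8")))  # → b'Month,Caf\xe9\n'
```

If the data genuinely contains characters outside Windows-1252, this conversion is lossy, and the cleaner fix is on the database side (nvarchar) rather than in the file.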

Why does TRANSACTION / COMMIT improve performance so much with PHP/MySQL (InnoDB)?

别来无恙 submitted on 2021-02-05 17:59:03

Question: I've been working with importing large CSV files of data, usually fewer than 100,000 records. I'm working with PHP and MySQL (InnoDB tables). I needed to use PHP to transform some fields and do some text processing prior to the MySQL INSERTs (part of process_note_data() in the code below). MySQL's LOAD DATA was not feasible, so please do not suggest it. I recently tried to improve the speed of this process by wrapping the inserts in MySQL transactions using START TRANSACTION and COMMIT. The performance increase
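The reason for the speed-up: in autocommit mode every INSERT is its own transaction, and each COMMIT forces InnoDB to flush its log durably to disk, so 100,000 rows pay for 100,000 flushes. One explicit transaction pays that cost once. A minimal sketch of the pattern, using Python's bundled SQLite in place of PHP/MySQL (the table and row contents are invented; the BEGIN/COMMIT structure is the point):

```python
import sqlite3

# isolation_level=None puts the connection in true autocommit mode,
# so transaction boundaries below are exactly the statements we issue.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")

rows = [("note %d" % i,) for i in range(1000)]

conn.execute("BEGIN")                                    # START TRANSACTION
conn.executemany("INSERT INTO notes (body) VALUES (?)", rows)
conn.execute("COMMIT")                                   # one durable flush

print(conn.execute("SELECT COUNT(*) FROM notes").fetchone()[0])  # → 1000
```

For very large imports it is common to COMMIT every few thousand rows instead of holding one giant transaction, which bounds memory in the engine's undo/redo machinery while keeping the per-row flush cost amortized.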

How do I deal with commas when writing Objects to CSV in Swift?

对着背影说爱祢 submitted on 2021-02-05 12:32:04

Question: There seem to be other answers to this on Stack Overflow, but nothing specific to Swift. I am generating a CSV from a Site object containing 3 properties:

struct SiteDetails {
    var siteName: String?
    var siteType: String?
    var siteUrl: String?
}

The problem is that siteName may contain a comma, so it's making it really hard to convert back from CSV into an object when I read the CSV file back, as some lines have 4 or more CSV elements. Here is the code I am using to export to CSV: func
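The language-independent fix is RFC 4180 quoting: wrap any field containing a comma (or a quote, or a newline) in double quotes, and double any embedded quotes. The round trip below demonstrates the rule in Python, whose csv module applies it automatically (the sample site values are invented; Swift code would apply the same rule by hand or via a CSV library rather than splitting on raw commas):

```python
import csv
import io

# A record whose name contains a comma — the case that breaks naive splitting.
site = {"siteName": "Acme, Inc.", "siteType": "retail",
        "siteUrl": "https://example.com"}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["siteName", "siteType", "siteUrl"])
writer.writeheader()
writer.writerow(site)          # siteName is emitted as "Acme, Inc."

text = buf.getvalue()
print(text)

# Reading it back with a quote-aware parser yields exactly three fields.
row = next(csv.DictReader(io.StringIO(text)))
print(row["siteName"])         # → Acme, Inc.
```

The key takeaway for the reader side is the same as the writer side: splitting each line on `,` is never safe once quoting is in play; the parser must honor quotes.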