export-to-csv

How to use a comma in CSV columns [duplicate]

怎甘沉沦 · submitted 2019-12-05 12:40:54
Question: This question already has answers here. Closed 8 years ago. Possible duplicate: Dealing with commas in a CSV file. We are exporting bulk data into a CSV file for one of our projects. In this case we have to export values like a,b,c,d, which all have to remain in one column, but the comma separates them into different columns. Similarly, if we export values entered in a textarea or editor that contain characters like \r\n, they end up as separate rows in the CSV. How can I solve this?
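The standard fix, as the linked duplicate explains, is to quote fields that contain the delimiter or embedded newlines. A minimal Python sketch of that rule, using the stdlib csv module (the data values here are made up for illustration):

```python
import csv
import io

# Fields containing commas or embedded newlines must be quoted so that
# spreadsheet applications keep them in a single cell.
rows = [
    ["id", "tags", "notes"],
    [1, "a,b,c,d", "line one\r\nline two"],  # comma-laden and multi-line fields
]

buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL)  # quotes only when needed
writer.writerows(rows)

print(buf.getvalue())
# "a,b,c,d" is emitted as one quoted column, not four; the \r\n field
# is likewise quoted so it does not start a new CSV row.
```

Any CSV library in any language applies the same convention (RFC 4180): wrap the field in double quotes and double any literal quotes inside it.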

ElasticSearch dump full index to a csv file

冷暖自知 · submitted 2019-12-05 11:58:19
Is it possible to easily export a whole index (all stored fields) of an Elasticsearch cluster into a .csv file (possibly out of the box)? The alternative I can think of is to query the whole index and then convert the result... but I really don't like the idea of parsing a monstrous JSON result, since it contains some millions of documents! Are there any other ways or ideas to achieve the export? Disclaimer: I'm the author of estab. estab exports Elasticsearch fields as tab-separated values. If you do not have too many fields, it's easy to explicitly export them all. Internally, estab uses the scan and scroll API
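The scan-and-stream approach estab describes can be sketched generically: iterate over hits one batch at a time and write only the selected fields, so the full JSON result is never held in memory. The `sample_hits` below are a hypothetical stand-in for what Elasticsearch's scan/scroll API (e.g. `helpers.scan` in the official Python client) would yield:

```python
import csv

def hits_to_tsv(hits, fields, path):
    """Stream Elasticsearch-style hits to a tab-separated file.

    `hits` is any iterable of hit dicts shaped like scan/scroll output
    ({"_source": {...}}); only the named fields are exported.
    """
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(fields)  # header row
        for hit in hits:
            src = hit.get("_source", {})
            writer.writerow(src.get(field, "") for field in fields)

# Hypothetical sample of scan/scroll output:
sample_hits = [
    {"_source": {"title": "doc one", "views": 10}},
    {"_source": {"title": "doc two", "views": 25}},
]
hits_to_tsv(sample_hits, ["title", "views"], "dump.tsv")
```

Because the iterable is consumed lazily, the same function works unchanged whether `hits` is a two-element list or a scroll over millions of documents.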

Exporting a dojo datagrid to a CSV file

末鹿安然 · submitted 2019-12-05 07:25:12
Question: I am looking for a JavaScript function which will export my datagrid (zero.grid.DataGrid) full of data into a CSV file, or something similar which can be opened by a spreadsheet application. Is there any standard way of doing this? Answer 1: I had a tough time using the Exporter plugin with EnhancedGrid with a servlet as the backend. Finally I made it work by using an iFrame: <!DOCTYPE HTML> <html lang="en"> <head> <meta charset="utf-8"> <title>CISROMM - Master Milestone List Editor</title> <!

Writing a CSV file in ASP.NET

孤街浪徒 · submitted 2019-12-05 06:32:36
I'm trying to export data to a CSV file. As there are Chinese characters in the data, I had to use Unicode, but after adding the preamble for Unicode, the commas are no longer recognized as delimiters and all the data is written to the first column. I'm not sure what is wrong. Below is my code, which I wrote in a .ashx file. DataView priceQuery = (DataView)context.Session["priceQuery"]; String fundName = priceQuery.Table.Rows[0][0].ToString().Trim().Replace(' ', '_'); context.Response.Clear(); context.Response.ClearContent(); context.Response.ClearHeaders(); context.Response.ContentType = "text/csv"
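The symptom described (Excel ignores the comma delimiter after a Unicode preamble) typically happens with a UTF-16 preamble, where Excel expects tab-delimited data. The usual fix is to emit UTF-8 with a BOM instead, which Excel detects while still treating the comma as the delimiter. A language-agnostic sketch of that fix in Python (the original is ASP.NET, but only the encoding choice matters; the rows here are invented):

```python
import csv

# Write UTF-8 with a BOM ("utf-8-sig"): Excel detects the encoding for
# non-ASCII text such as Chinese and still splits on commas, whereas a
# UTF-16 preamble makes Excel expect tab-separated data.
rows = [["fund", "price"], ["中文基金", "1.23"]]

with open("prices.csv", "w", newline="", encoding="utf-8-sig") as f:
    csv.writer(f).writerows(rows)
```

In ASP.NET the equivalent is writing `Encoding.UTF8.GetPreamble()` followed by UTF-8-encoded bytes, rather than `Encoding.Unicode` with comma delimiters.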

How to save Amazon Redshift output to local CSV through PSQL on SQL WorkBench?

纵然是瞬间 · submitted 2019-12-05 05:40:06
I am writing PSQL against Amazon Redshift, and now I am trying to save the output as CSV through a PSQL query in SQL Workbench. The reason I plan to do this through a query, instead of using a SELECT clause and then right-clicking to save the output as CSV, is that there is a large amount of data; I found that if I generate the output into a temp table, it's much, much faster than using SELECT to display all the output. Therefore, I am wondering whether saving to a local CSV can be faster too. I have tried the top solution here; however, it doesn't work on Amazon Redshift. When I am using Copy
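When client-side `\copy` is unavailable, the generic fallback is to fetch query results in batches through the driver and stream them to a local CSV. A sketch using the DB-API pattern, with sqlite3 standing in for a Redshift connection (with Redshift you would connect via psycopg2 instead, and for very large tables `UNLOAD` to S3 is usually faster still; the table and data here are invented):

```python
import csv
import sqlite3

# sqlite3 stands in for a Redshift/psycopg2 connection: the DB-API
# fetchmany loop is identical for either driver.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100.0), ("west", 250.5)])

cur = conn.execute("SELECT region, amount FROM sales")
with open("sales.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([d[0] for d in cur.description])  # header from cursor metadata
    while True:
        batch = cur.fetchmany(1000)  # stream; never load everything at once
        if not batch:
            break
        writer.writerows(batch)
```

Streaming in batches keeps memory flat regardless of result size, which addresses the "large amount of data" concern without depending on SQL Workbench's export dialog.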

Angular ui-grid external export buttons

旧时模样 · submitted 2019-12-05 04:50:34
I am new to working with the Angular UI-Grid, and I need to create external buttons for the exporting features, like PDF export and CSV export, similar to this image. Do you have any idea how I can do it? Also, I need a Print button, but I don't see it in the documentation. Is there a Print behavior for this grid? Thank you, Ernesto. Taking a look at the ui-grid.exporter source code (this will specifically address PDF export, which starts at ~line 972, but you can apply it to the CSV use case as well), you would want to create an external button in your HTML, then tie the uiGridExporterService's

Python/Numpy - Save Array with Column AND Row Titles

≯℡__Kan透↙ · submitted 2019-12-05 03:58:00
I want to save a 2D array to a CSV file with row and column "header" information (like a table). I know that I could use the header argument to numpy.savetxt to save the column names, but is there any easy way to also include some other array (or list) as the first column of data (like row titles)? Below is an example of how I currently do it. Is there a better way to include those row titles, perhaps some trick with savetxt I'm unaware of? import csv import numpy as np data = np.arange(12).reshape(3,4) # Add a '' for the first column because the row titles go there... cols = ['', 'col1',
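One savetxt-based answer to the question above: stack the row titles onto the data as a string column with `np.column_stack`, and pass the column titles through savetxt's `header` argument, with a leading comma for the blank corner cell. A minimal sketch (file name and titles are invented):

```python
import numpy as np

data = np.arange(12).reshape(3, 4)
row_titles = np.array(["row1", "row2", "row3"])
col_titles = ["col1", "col2", "col3", "col4"]

# Prepend the row titles as a string column, then let savetxt write the
# column header; the leading comma leaves a blank cell above the row titles.
table = np.column_stack([row_titles, data.astype(str)])
np.savetxt("table.csv", table, fmt="%s", delimiter=",",
           header="," + ",".join(col_titles), comments="")
```

Note `comments=""`: without it, savetxt prefixes the header line with `# `. The cost of this trick is that the whole array is written as strings; for mixed-type tables, `pandas.DataFrame.to_csv` with an index is the more natural tool.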

PHP fputcsv encoding

℡╲_俬逩灬. · submitted 2019-12-05 01:30:17
Question: I create a CSV file with fputcsv. I want the CSV file to be in the Windows-1251 encoding, but I can't find the solution. How can I do that? Cheers. Answer 1: The default encoding for an Excel file is machine-specific ANSI, mainly Windows-1252. But since you are creating that file and maybe inserting UTF-8 characters, that file is not going to be handled correctly. You could use iconv() when creating the file. E.g.: function encodeCSV(&$value, $key){ $value = iconv('UTF-8', 'Windows-1252', $value); } array_walk($values,
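The iconv() idea in the answer, transcoding each value from UTF-8 to the target codepage before writing, can be shown in Python, where the file encoding does the conversion in one place (the asker wants Windows-1251, which covers Cyrillic; the rows here are invented):

```python
import csv

# Open the file in the target single-byte codepage; every str field is
# transcoded from Python's internal Unicode to Windows-1251 on write,
# the same effect as running iconv('UTF-8', 'Windows-1251', ...) per field.
rows = [["имя", "город"], ["Иван", "Москва"]]

with open("report.csv", "w", newline="", encoding="windows-1251") as f:
    csv.writer(f).writerows(rows)
```

Characters outside the codepage raise an encoding error; `errors="replace"` is the lossy fallback, matching iconv's `//TRANSLIT` suffix in spirit.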

Dynamic creation of columns using CsvHelper

三世轮回 · submitted 2019-12-05 01:26:32
I have a Worker class with various fields that are fetched from the server. I am using the CsvHelper package to convert this class to a spreadsheet. Worker has fields like: class Worker { string name; string phone; string age; Dictionary<string,object> customerField; } I can map the name, phone and age like: class WorkerMap : CsvClassMap<Worker> { public WorkerMap() { Map(m => m.name); Map(m => m.phone); Map(m => m.age); } } And I register the map with: csv.Configuration.RegisterClassMap<WorkerMap>(); and write the list of workers with: csv.WriteRecords(workerList); How can I map the customerField dictionary to
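The underlying problem, fixed fields plus a per-record dictionary of dynamic fields, is language-independent: collect the union of dynamic keys first so every record shares one header, then write each row with blanks for missing keys. A Python analog using `csv.DictWriter` (the worker data is invented; in C#, the equivalent is building the header dynamically or writing fields manually per row):

```python
import csv

# Fixed fields plus a per-record dict of dynamic fields, as in the
# Worker class above.
workers = [
    {"name": "Ann", "phone": "555-1", "age": "30",
     "customerField": {"site": "A", "badge": "7"}},
    {"name": "Bob", "phone": "555-2", "age": "41",
     "customerField": {"site": "B"}},
]

fixed = ["name", "phone", "age"]
# Union of all dynamic keys, so every row shares one header.
dynamic = sorted({k for w in workers for k in w["customerField"]})

with open("workers.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fixed + dynamic, restval="")
    writer.writeheader()
    for w in workers:
        row = {k: w[k] for k in fixed}
        row.update(w["customerField"])
        writer.writerow(row)  # restval="" fills columns a worker lacks
```

The two-pass shape (scan keys, then write) is the crucial design choice: CSV has a single header row, so the full column set must be known before the first record is written.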

Why write.csv and read.csv are not consistent? [closed]

人走茶凉 · submitted 2019-12-05 01:15:26
The problem is simple; consider the following example: m <- head(iris) write.csv(m, file = 'm.csv') m1 <- read.csv('m.csv') The result of this is that m1 differs from the original object m in that it has a new first column named "X". If I really wanted to make them equal, I would have to use additional arguments, as in these two examples: write.csv(m, file = 'm.csv', row.names = FALSE) # and then m1 <- read.csv('m.csv') or write.csv(m, file = 'm.csv') m1 <- read.csv('m.csv', row.names = 1) The question is: what is the reason for this difference? In particular, why, if write.csv and read.csv are