csv

How to automatically create a table in Postgres based on a CSV using Python

做~自己de王妃 submitted on 2021-02-11 15:37:12
Question: I am a new Python programmer trying to import a sample CSV file into my Postgres database using a Python script. I have a CSV file named abstable1 with 3 headers: absid, name, number. I have many such files in a folder, and for each one I want to create a table in PostgreSQL with the same name as the CSV file. Here is the code I tried, to create a table for just one file as a test:

import psycopg2
import csv
import os

#filePath = 'c:\\Python27\\Scripts\\abstable1.csv'
conn = psycopg2.connect(
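One way to approach what the asker wants can be sketched with the standard library alone: derive the table name from the file name and the columns from the CSV's header row. The function below only builds the DDL string; executing it through a psycopg2 cursor (connection details as in the question) is the remaining step. All names here are illustrative assumptions, not the asker's actual code.

```python
import csv
import os

def create_table_sql_for_csv(file_path):
    """Build a CREATE TABLE statement named after the CSV file,
    with one TEXT column per header field (TEXT as a safe staging type)."""
    table = os.path.splitext(os.path.basename(file_path))[0]
    with open(file_path, newline="") as f:
        headers = next(csv.reader(f))  # first row = column names
    cols = ", ".join(f"{h.strip()} TEXT" for h in headers)
    return f"CREATE TABLE {table} ({cols})"

# Demo with a small file matching the question's layout:
with open("abstable1.csv", "w") as f:
    f.write("absid,name,number\n1,foo,10\n")
print(create_table_sql_for_csv("abstable1.csv"))
# -> CREATE TABLE abstable1 (absid TEXT, name TEXT, number TEXT)
```

With psycopg2 this would presumably be `cur.execute(create_table_sql_for_csv(path))` followed by `conn.commit()`, looping over the folder's files with os.listdir. Note the header names are interpolated directly into SQL, so this is only safe for trusted input files.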

Skipping lines while reading from a CSV file in Java [duplicate]

a 夏天 submitted on 2021-02-11 15:28:56
Question: This question already has answers here: BufferedReader is skipping every other line when reading my file in java (3 answers). Closed 1 year ago.

private static List<Book> readDataFromCSV(String fileName) {
    List<Book> books = new ArrayList<>();
    Path pathToFile = Paths.get(fileName);
    // create an instance of BufferedReader
    // using try with resource, Java 7 feature to close resources
    try (BufferedReader br = Files.newBufferedReader(pathToFile, StandardCharsets.US_ASCII)) {
        // read the first
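The linked duplicate's usual culprit is calling readLine() once in the while condition and again in the loop body, so every other line is silently discarded. The same pitfall can be sketched in Python (used here for consistency with the other examples; the helper names are hypothetical):

```python
import io

def read_rows_buggy(f):
    # Buggy pattern: consume the stream twice per iteration, analogous to
    # calling readLine() in both the while condition and the loop body.
    rows = []
    reader = iter(f)
    for _ in reader:                   # reads one line just to test for EOF...
        line = next(reader, None)      # ...then reads ANOTHER line to process
        if line is None:
            break
        rows.append(line.strip())
    return rows

def read_rows_fixed(f):
    # Correct pattern: read each line exactly once and reuse it.
    return [line.strip() for line in f]

data = "a\nb\nc\nd\n"
print(read_rows_buggy(io.StringIO(data)))  # every other line lost: ['b', 'd']
print(read_rows_fixed(io.StringIO(data)))  # ['a', 'b', 'c', 'd']
```

In the Java original the fix is the same shape: assign `line = br.readLine()` once per iteration and test that single value for null.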

How to get a CSV string from querying a relational DB?

感情迁移 submitted on 2021-02-11 15:13:20
Question: I'm querying a relational database and I need the result as a CSV string. I can't save it to disk, since this runs in a serverless environment (I don't have access to a disk). Any ideas?

Answer 1: My solution was to use the PyGreSQL library and define this function:

import pg

def get_csv_from_db(query, cols):
    """
    Given the SQL @query and the expected @cols, a string-formatted CSV
    (containing headers) is returned
    :param str query:
    :param list of str cols:
    :return str:
    """
    connection = pg.DB(
        dbname=my
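The core of the answer's technique is writing CSV into an in-memory buffer instead of a file. A self-contained sketch of that idea, using io.StringIO plus csv.writer and an in-memory SQLite database as a stand-in for the answer's PyGreSQL connection (the table and column names below are invented for the demo):

```python
import csv
import io
import sqlite3

def get_csv_string(cursor, query):
    """Run @query and return the result set as a CSV string held in memory.

    Headers come from cursor.description, so nothing ever touches the disk."""
    cursor.execute(query)
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(col[0] for col in cursor.description)  # header row
    writer.writerows(cursor.fetchall())                    # data rows
    return buf.getvalue()

# Demo against an in-memory SQLite database (stand-in for Postgres):
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (absid INTEGER, name TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'alpha')")
print(get_csv_string(conn.cursor(), "SELECT * FROM t"))
# -> absid,name
#    1,alpha
```

The same buffer approach works with any DB-API 2.0 driver (psycopg2 included), since they all expose cursor.description.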

Google Docs spreadsheet export: How to remove apostrophes from times and dates [closed]

巧了我就是萌 submitted on 2021-02-11 15:09:56
Question: Closed. This question does not meet Stack Overflow guidelines and is not accepting answers. Closed 3 years ago.

I'm exporting a spreadsheet from Google Docs as CSV, and there are apostrophes (') prepended to each date and time value. This is really annoying, as OpenOffice doesn't seem to be able to find/replace these in the spreadsheet editor. I could solve the problem by
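One post-export workaround would be stripping the leading apostrophes from the CSV text in a small script. A minimal sketch (the function name and sample data are illustrative, not from the question):

```python
import csv
import io

def strip_leading_apostrophes(csv_text):
    """Remove a leading apostrophe from every field of a CSV string.

    Spreadsheet apps prepend ' to force text formatting; stripping it after
    export restores the plain date/time values."""
    out = io.StringIO()
    writer = csv.writer(out)
    for row in csv.reader(io.StringIO(csv_text)):
        writer.writerow(f[1:] if f.startswith("'") else f for f in row)
    return out.getvalue()

dirty = "date,time\n'2021-02-11,'15:09\n"
print(strip_leading_apostrophes(dirty))
# -> date,time
#    2021-02-11,15:09
```

Going through csv.reader/csv.writer (rather than a bare string replace) keeps quoted fields containing legitimate apostrophes intact.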

Clean wrong header rows inside a DataFrame with Python/Pandas

懵懂的女人 submitted on 2021-02-11 14:37:49
Question: I've got a corrupt data frame with random duplicates of the header row inside the data. How can I ignore or delete these rows while loading the data frame? Since the stray header is in the data, pandas raises an error while loading. I would like to ignore this row while loading with pandas, or delete it somehow before loading. The file looks like this:

col1, col2, col3
0, 1, 1
0, 0, 0
1, 1, 1
col1, col2, col3   <- this is the random copy of the header inside the dataframe
0, 1
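One pre-cleaning approach, sketched with only the standard library: drop every row that is identical to the header before handing the text to pandas. The helper name and sample data are illustrative assumptions.

```python
import csv
import io

def drop_repeated_headers(csv_text):
    """Return CSV text with every row identical to the header row removed,
    keeping only the first (real) header."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header = rows[0]
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(header)
    writer.writerows(r for r in rows[1:] if r != header)
    return out.getvalue()

dirty = "col1,col2,col3\n0,1,1\ncol1,col2,col3\n1,1,1\n"
print(drop_repeated_headers(dirty))
# -> col1,col2,col3
#    0,1,1
#    1,1,1
```

The cleaned string can then be loaded normally, e.g. `pd.read_csv(io.StringIO(clean_text))`; alternatively, after loading with `dtype=str` one could filter rows equal to the column names and re-cast the dtypes.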

How to stream a large gzipped .tsv file from s3, process it, and write back to a new file on s3?

我们两清 submitted on 2021-02-11 14:34:19
Question: I have a large file s3://my-bucket/in.tsv.gz that I would like to load and process, then write the processed version back to an S3 output file s3://my-bucket/out.tsv.gz. How do I stream in.tsv.gz directly from S3 without loading the whole file into memory (it cannot fit in memory)? How do I write the processed gzipped stream directly back to S3? In the following code, I show how I was thinking of loading the input gzipped dataframe from S3, and how I would write the .tsv if it were located locally
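The streaming part of the problem can be sketched with the standard library alone: gzip.open accepts any binary file-like object, so decompression and recompression can both happen line by line without materializing the file. Below, in-memory buffers stand in for the S3 streams; in practice the source could be, for example, the streaming body returned by boto3's get_object, or both ends could be wrapped by the smart_open library. Those S3 pieces are assumptions and are not shown here.

```python
import gzip
import io

def process_gzipped_stream(src, dst, transform):
    """Stream-decompress @src, apply @transform to each text line, and
    stream-compress the result into @dst, one line at a time."""
    with gzip.open(src, "rt") as reader, gzip.open(dst, "wt") as writer:
        for line in reader:
            writer.write(transform(line))

# Local demo: BytesIO buffers standing in for the S3 download/upload streams.
src = io.BytesIO(gzip.compress(b"a\t1\nb\t2\n"))
dst = io.BytesIO()
process_gzipped_stream(src, dst, str.upper)
print(gzip.decompress(dst.getvalue()).decode())  # A\t1 / B\t2, uppercased
```

Because only one line is held in memory at a time, this scales to files far larger than RAM; chunked (rather than per-line) reads would work the same way for binary transforms.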

How do I import CSV file into a MySQL table?

你离开我真会死。 submitted on 2021-02-11 14:25:22
Question: I have an unnormalized events-diary CSV from a client that I'm trying to load into a MySQL table so that I can refactor it into a sane format. I created a table called 'CSVImport' that has one field for every column of the CSV file. The CSV contains 99 columns, so this was a hard enough task in itself:

CREATE TABLE 'CSVImport' (id INT);
ALTER TABLE CSVImport ADD COLUMN Title VARCHAR(256);
ALTER TABLE CSVImport ADD COLUMN Company VARCHAR(256);
ALTER TABLE CSVImport ADD COLUMN NumTickets VARCHAR
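Hand-writing 99 ALTER TABLE statements is exactly the kind of work a short script can do from the CSV header. A sketch that generates the same style of DDL the asker wrote (the function name is hypothetical; VARCHAR(256) mirrors the question's choice):

```python
import csv
import io

def generate_import_ddl(csv_text, table="CSVImport"):
    """Generate CREATE TABLE / ALTER TABLE statements, one VARCHAR column
    per CSV header field, for staging a wide CSV without hand-written DDL."""
    headers = next(csv.reader(io.StringIO(csv_text)))  # first row only
    stmts = [f"CREATE TABLE {table} (id INT);"]
    stmts += [f"ALTER TABLE {table} ADD COLUMN {h.strip()} VARCHAR(256);"
              for h in headers]
    return "\n".join(stmts)

print(generate_import_ddl("Title,Company,NumTickets\n"))
```

After staging the table, MySQL's LOAD DATA INFILE (or LOAD DATA LOCAL INFILE from a client) is the usual way to bulk-load the rows; as with the earlier sketch, header names go into the SQL unescaped, so trusted input only.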