I have a CSV file with the header as the keys and the data as the values. My goal is to convert the CSV file into JSON to upload into a database and then output the data I uploaded.
You can try using .to_dict() if you load your data into a DataFrame:
import pandas as pd

df = pd.read_csv('so-emissions-by-world-region-in-million-tonnes.csv')
df.T.to_dict().values()
.to_dict() turns your DataFrame into a map by columns (for each column you get index -> value). By transposing and calling .to_dict(), you instead get a map by rows (for each index you get a map column -> value). You don't need the keys, so just take .values().
Be careful: this is a dict_values object in Python 3, so you may want to wrap it in list() before converting to JSON.
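Putting those steps together, a minimal sketch reusing the CSV filename from above:

import json
import pandas as pd

df = pd.read_csv('so-emissions-by-world-region-in-million-tonnes.csv')
# Transpose so .to_dict() maps each row index to a {column: value} dict,
# then drop the row indices and wrap the dict_values view in a list.
records = list(df.T.to_dict().values())
# Recent pandas versions return native Python types here; if you hit a
# TypeError on NumPy scalars, convert them first (e.g. json.dumps(records, default=str)).
json_data = json.dumps(records)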
By the way, you can also use dict(zip(columns, values)) to get a map column -> value for each row, which is faster. In that case you don't need pandas at all.
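As a sketch of that pandas-free route, using csv.reader from the standard library and the same filename as above:

import csv
import json

with open('so-emissions-by-world-region-in-million-tonnes.csv', newline='') as f:
    reader = csv.reader(f)
    columns = next(reader)  # first row holds the column names
    rows = [dict(zip(columns, values)) for values in reader]  # column -> value per row
json_data = json.dumps(rows)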
Edit: if the CSV has no header, you need to pass the column names to pd.read_csv() with the names= keyword, for example:
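(The column names below are made up purely for illustration.)

import pandas as pd

df = pd.read_csv('so-emissions-by-world-region-in-million-tonnes.csv',
                 header=None,                                # the file has no header row
                 names=['region', 'year', 'so2_emissions'])  # hypothetical column names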
You can use csv.DictReader to read your CSV and then serialise its output with json.dumps:
import csv
import json

data = []
with open('file.csv') as f:
    for row in csv.DictReader(f):
        data.append(row)

json_data = json.dumps(data)
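From there you can print the serialised data or write it straight to a file ('output.json' is just a placeholder name):

print(json.dumps(data, indent=2))  # pretty-printed for inspection

with open('output.json', 'w') as out:
    json.dump(data, out)  # write the JSON to a file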
You are currently printing the result, which is the dictionary itself. If you want the output in the nice format shown in the question, you need to go through the dictionary and print out each key and its values:
for key in keys:  # looking through each key
    print(key)
    for i in results:  # going through the results and printing the value at index i for the current key
        print(results[i][key])
This should give the expected output in the console, as mentioned in the question.