I have a CSV file with 3 columns, where each row of Column 3 contains a list of values, as you can see from the following table structure:
Col1,Col2,Col3
1,a
If you have the option to write the file, you can use pd.to_parquet and pd.read_parquet instead of CSV. Parquet will properly preserve this column's type.
Adding a replace to Cunningham's answer:
df = pd.read_csv("in.csv",converters={"Col3": lambda x: x.strip("[]").replace("'","").split(", ")})
See also pandas - convert string into list of strings
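To show the converter end to end, here is a small self-contained run on hypothetical input matching the question's Col3 format:

```python
import io
import pandas as pd

# Hypothetical sample in the same shape as the question's CSV:
# Col3 is a quoted string representation of a list.
data = '''Col1,Col2,Col3
1,a,"['x', 'y', 'z']"
2,b,"['q']"
'''

# strip("[]") drops the brackets, replace("'", "") removes the inner
# quotes, and split(", ") breaks the remainder into list elements.
df = pd.read_csv(
    io.StringIO(data),
    converters={"Col3": lambda x: x.strip("[]").replace("'", "").split(", ")},
)
print(df["Col3"].tolist())
```

Each Col3 cell is now a plain list of unquoted strings.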
@Padraic Cunningham's answer will not work if you have to parse lists of strings that do not have quotes. For example, literal_eval will successfully parse "['a', 'b', 'c']", but not "[a, b, c]". To load strings like this, use the PyYAML library.
import io
import pandas as pd
data = '''
A,B,C
"[1, 2, 3]",True,"[a, b, c]"
"[4, 5, 6]",False,"[d, e, f]"
'''
df = pd.read_csv(io.StringIO(data), sep=',')
df
A B C
0 [1, 2, 3] True [a, b, c]
1 [4, 5, 6] False [d, e, f]
df['C'].tolist()
# ['[a, b, c]', '[d, e, f]']
import yaml
df[['A', 'C']] = df[['A', 'C']].applymap(yaml.safe_load)
df['C'].tolist()
# [['a', 'b', 'c'], ['d', 'e', 'f']]
yaml can be installed using pip install pyyaml.
You could use the ast lib:
from ast import literal_eval
df.Col3 = df.Col3.apply(literal_eval)
print(df.Col3[0][0])
Proj1
You can also do it when you create the dataframe from the csv, using converters:
df = pd.read_csv("in.csv",converters={"Col3": literal_eval})
If you are sure the format is the same for all strings, stripping and splitting will be a lot faster:
df = pd.read_csv("in.csv",converters={"Col3": lambda x: x.strip("[]").split(", ")})
But you will end up with the strings wrapped in quotes.
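To illustrate that caveat on hypothetical input (using the "Proj1" value from the question's data), the strip-and-split converter keeps the single quotes on each element:

```python
import io
import pandas as pd

# Hypothetical sample: Col3 is a string representation of a quoted list.
data = '''Col1,Col2,Col3
1,a,"['Proj1', 'Proj2']"
'''

# Only the brackets are stripped, so each element retains its quotes.
df = pd.read_csv(
    io.StringIO(data),
    converters={"Col3": lambda x: x.strip("[]").split(", ")},
)
print(df["Col3"][0])
```

The elements come back as "'Proj1'" and "'Proj2'" rather than "Proj1" and "Proj2", which is why literal_eval (or an extra replace) is the safer choice when the quotes matter.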
I have a different approach for this, which can be used for string representations of other data types, besides just lists. You can use the json library and apply json.loads() to the desired column, e.g.
import json
df.my_column = df.my_column.apply(json.loads)
For this to work, however, the list elements in your input strings must be enclosed in double quotes, as JSON requires.
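A minimal sketch with hypothetical input (note the doubled double quotes, which is how CSV escapes a quote character inside a quoted field):

```python
import io
import json
import pandas as pd

# Hypothetical sample: the list elements use double quotes, so each
# cell is valid JSON once pandas has unescaped the CSV quoting.
data = '''id,my_column
1,"[""a"", ""b""]"
2,"[""c""]"
'''

df = pd.read_csv(io.StringIO(data))
# Each cell arrives as the string '["a", "b"]', which json.loads
# turns into a real Python list.
df.my_column = df.my_column.apply(json.loads)
print(df.my_column.tolist())
```

Single-quoted elements like "['a', 'b']" would raise a JSONDecodeError here; that is the case where literal_eval or yaml.safe_load is the better fit.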