I think I misunderstand the intention of read_csv. If I have a file 'j' like
# notes
a,b,c
# more notes
1,2,3
How can I pandas.read_csv this file, ignoring the commented lines?
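Roughly what I'm running is something like this (a sketch of my call; I'm assuming the comment keyword is the right knob here):

import pandas as pd

# Hoping comment='#' makes read_csv drop the commented lines entirely
df = pd.read_csv('j', comment='#')
print(df)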
One workaround is to specify skiprows to ignore the first few lines:
In [11]: s = '# notes\na,b,c\n# more notes\n1,2,3'
In [12]: pd.read_csv(StringIO(s), sep=',', comment='#', skiprows=1)
Out[12]:
     a    b    c
0  NaN  NaN  NaN
1    1    2    3
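The same workaround applied to the file 'j' from the question would look something like this (just a sketch; it assumes 'j' is a plain CSV in the working directory):

import pandas as pd

# skiprows=1 jumps over the leading '# notes' line so that a,b,c is picked up
# as the header; comment='#' then blanks out the remaining commented line,
# which is why an all-NaN row shows up above.
df = pd.read_csv('j', sep=',', comment='#', skiprows=1)
print(df)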
Otherwise read_csv gets a little confused: the first commented line is stripped to an empty line and consumed as the header (hence the Unnamed: 0 column), so the real header row a,b,c ends up as data:
In [13]: pd.read_csv(StringIO(s), sep=',', comment='#')
Out[13]:
         Unnamed: 0
a   b             c
NaN NaN         NaN
1   2             3
This seems to be the case in 0.12.0; I've filed a bug report.
As Viktor points out, you can use dropna to remove the NaN rows after the fact (there is a recent open issue to have commented lines ignored completely):
In [14]: pd.read_csv(StringIO(s), sep=',', comment='#', skiprows=1).dropna(how='all')
Out[14]:
   a  b  c
1  1  2  3
Note: the default index will "give away" the fact there was missing data.
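If that matters to you, one option (my addition, not from Viktor's comment) is to reset the index after dropping the NaN rows, e.g.:

import pandas as pd
from io import StringIO  # current Python; older (Python 2) sessions used the StringIO module

s = '# notes\na,b,c\n# more notes\n1,2,3'

df = (pd.read_csv(StringIO(s), sep=',', comment='#', skiprows=1)
        .dropna(how='all')        # on older pandas this removes the all-NaN row from '# more notes'
        .reset_index(drop=True))  # renumber 0..n-1 so nothing hints at the dropped row
print(df)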