I have a large CSV file (~10GB) with around 4000 columns. I know that most of the data I expect is int8, so I set:
pandas.read_csv('file.dat', sep=',',
If you are certain of the number of columns, you could build the dictionary like this:
dtype = dict(zip(range(4000), ['int8'] * 3999 + ['int32']))
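As a quick sanity check, here is that dictionary pattern at a small scale (using 5 columns as a stand-in for your 4000, with sample data I made up): every column is read as int8 except the last, which is int32.

```python
import io

import pandas as pd

# Stand-in for the 4000-column case: all int8 except the last column.
n_cols = 5
dtype = dict(zip(range(n_cols), ['int8'] * (n_cols - 1) + ['int32']))

data = io.StringIO('1,2,3,4,5\n6,7,8,9,10')
df = pd.read_csv(data, dtype=dtype, header=None)
print(df.dtypes)
```

The dictionary maps column positions (since `header=None` makes the column labels integers) to dtype strings, exactly as in the one-liner above.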
Considering that this works:
import io

import pandas as pd

data = '''\
1,2,3
4,5,6'''
fileobj = io.StringIO(data)
df = pd.read_csv(fileobj, dtype={0: 'int8', 1: 'int8', 2: 'int32'}, header=None)
print(df.dtypes)
Returns:
0 int8
1 int8
2 int32
dtype: object
From the docs:
dtype : Type name or dict of column -> type, default None
Data type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32} Use str or object to preserve and not interpret dtype. If converters are specified, they will be applied INSTEAD of dtype conversion.
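Note the last sentence of that docs excerpt: if you also pass `converters`, they replace (rather than combine with) the `dtype` for those columns. A small sketch of what that means in practice (the converter here is just an illustrative example; pandas may also emit a ParserWarning when both are given for the same column):

```python
import io
import warnings

import pandas as pd

data = io.StringIO('1,2\n3,4')
with warnings.catch_warnings():
    # pandas warns that only the converter will be used for column 1.
    warnings.simplefilter('ignore')
    df = pd.read_csv(data,
                     dtype={0: 'int8', 1: 'int8'},
                     converters={1: lambda x: int(x) * 10},
                     header=None)

print(df.dtypes)
# Column 0 honours the dtype; column 1 takes whatever type the
# converter's return values infer to, not int8.
```

So for your case, stick to `dtype` alone unless you genuinely need per-value transformation.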