It's always a good idea to run some benchmarks for your use case. I've had good results storing raw structs via numpy:
df.to_records().astype(mytype).tofile('mydata')  # write
df = pd.DataFrame.from_records(np.fromfile('mydata', dtype=mytype))  # read back
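Since mytype isn't defined above: it's a numpy record dtype matching the frame's fields, and a string index has to be given a fixed width, because tofile() can't write object columns. A minimal round-trip sketch (the frame, field widths, and filenames here are made up for illustration):

import numpy as np
import pandas as pd

# Hypothetical example frame with a string index.
df = pd.DataFrame(
    {'a': np.random.rand(5), 'b': np.random.rand(5)},
    index=['r0', 'r1', 'r2', 'r3', 'r4'],
)

# The unnamed index becomes a field called 'index' in to_records();
# give it a fixed-width bytes type ('S8') so it can be written to disk.
mytype = np.dtype([('index', 'S8'), ('a', 'f8'), ('b', 'f8')])

df.to_records().astype(mytype).tofile('mydata')
df2 = pd.DataFrame.from_records(np.fromfile('mydata', dtype=mytype),
                                index='index')
# Note: the index comes back as bytes (b'r0', ...), not str.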
It is quite fast and takes less space on disk. But: you'll need to keep track of the dtype to reload the data, it isn't portable across architectures, and it doesn't support the advanced features of HDF5. (numpy's .npy format, written with np.save and read back with np.load, is designed to overcome the first two limitations, but I haven't had much success getting it to work.)
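For reference, .npy records the dtype (including byte order) in a file header, so in principle it addresses both issues. A minimal sketch, reusing the hypothetical df and mytype from above:

import numpy as np
import pandas as pd

# .npy stores the dtype in the header, so there's no need to track
# mytype separately, and the byte order is recorded for portability.
np.save('mydata.npy', df.to_records().astype(mytype))
df = pd.DataFrame.from_records(np.load('mydata.npy'))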
Update: Thanks for pressing me for numbers. My benchmark indicates that HDF5 does indeed win, at least in my case. It's both faster and smaller on disk! Here's what I see with a dataframe of about 280k rows, 7 float columns, and a string index:
In [15]: %timeit df.to_hdf('test_fixed.hdf', 'test', mode='w')
10 loops, best of 3: 172 ms per loop
In [17]: %timeit df.to_records().astype(mytype).tofile('raw_data')
1 loops, best of 3: 283 ms per loop
In [20]: %timeit pd.read_hdf('test_fixed.hdf', 'test')
10 loops, best of 3: 36.9 ms per loop
In [22]: %timeit pd.DataFrame.from_records(np.fromfile('raw_data', dtype=mytype))
10 loops, best of 3: 40.7 ms per loop
In [23]: ls -l raw_data test_fixed.hdf
-rw-r----- 1 altaurog altaurog 18167232 Apr 8 12:42 raw_data
-rw-r----- 1 altaurog altaurog 15537704 Apr 8 12:41 test_fixed.hdf
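For anyone who wants to reproduce the comparison, a sketch of a comparable test frame (the data is random, the shape just matches the description above, and to_hdf needs PyTables installed):

import numpy as np
import pandas as pd

n = 280_000
df = pd.DataFrame(
    np.random.rand(n, 7),
    columns=list('abcdefg'),
    index=['row%06d' % i for i in range(n)],
)

# Fixed-width string index ('row000000' is 9 chars) plus seven float64 columns.
mytype = np.dtype([('index', 'S9')] + [(c, 'f8') for c in 'abcdefg'])

df.to_hdf('test_fixed.hdf', 'test', mode='w')
df.to_records().astype(mytype).tofile('raw_data')

Timing the writes and reads with %timeit and checking file sizes as above should give comparable numbers on similar hardware.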