hdf5

Exception 'HDFStore requires PyTables' when using an HDF5 file in IPython

岁酱吖の, submitted on 2019-12-22 11:02:12
Question: I am very new to Python and am trying to create a table in pandas with HDFStore as follows: store = HDFStore('store.h5'). I get this exception: Exception Traceback (most recent call last) C:\Python27\<ipython-input-11-de3060b689e6> in <module>() ----> 1 store = HDFStore('store.h5') C:\Python27\lib\site-packages\pandas-0.10.1-py2.7-win32.egg\pandas\io\pytables.pyc in __init__(self, path, mode, complevel, complib, fletcher32) 196 import tables as _ 197 except ImportError: # pragma: no cover -->
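The truncated traceback ends inside an `except ImportError` branch: HDFStore needs the optional PyTables package (imported as `tables`), and it is not installed. A minimal sketch of the diagnosis (the store file name is illustrative):

```python
# Check for the optional PyTables dependency before constructing an HDFStore.
import importlib.util

def pytables_available():
    """True if PyTables (the 'tables' package) can be imported."""
    return importlib.util.find_spec("tables") is not None

if pytables_available():
    import pandas as pd
    # With PyTables present, the constructor from the question succeeds.
    with pd.HDFStore("store.h5", mode="w") as store:
        pass
else:
    print("PyTables missing; install it with: pip install tables")
```

The usual fix is simply `pip install tables` (the PyPI name of PyTables), then restarting the IPython kernel so the new package is picked up.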

Update pandas DataFrame in stored in a Pytable with another pandas DataFrame

流过昼夜, submitted on 2019-12-22 06:37:40
Question: I am trying to write a function that updates a pandas DataFrame that I have stored in a PyTable with new data from another pandas DataFrame. I want to check whether data is missing in the PyTable for specific DatetimeIndexes (a value is NaN or a new timestamp is available), replace it with new values from a given pandas DataFrame, and append this to the PyTable. Basically, just update a PyTable. I can get the combined DataFrame using the combine_first method in pandas. Below the PyTable is
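The merge step the question mentions can be sketched without the storage layer; `combine_first` keeps the stored values where they exist and fills NaNs and new timestamps from the incoming frame (column name and dates are illustrative; the HDFStore round-trip is the same pattern with `store[key]` reads and writes):

```python
import numpy as np
import pandas as pd

# Previously stored data: one NaN gap on 2019-01-02.
stored = pd.DataFrame(
    {"price": [1.0, np.nan, 3.0]},
    index=pd.to_datetime(["2019-01-01", "2019-01-02", "2019-01-03"]),
)

# Incoming data: fills the gap and adds a new timestamp.
new = pd.DataFrame(
    {"price": [2.0, 4.0]},
    index=pd.to_datetime(["2019-01-02", "2019-01-04"]),
)

# Stored non-NaN values win; NaNs and missing rows come from `new`.
updated = stored.combine_first(new)
print(updated)
```

Writing `updated` back under the same key then replaces the old table in one step, which is simpler than trying to patch individual rows in place.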

Add HDF5 libs (Java & C++) to a public Maven repository?

 ̄綄美尐妖づ, submitted on 2019-12-22 05:17:24
Question: Is there a public Maven repository where I or somebody else can put the HDF Java and HDF C++ libraries? I don't have a Maven repository, and I don't want to set one up myself just for these third-party libraries. More people must be using the HDF libraries and wanting to use them as part of a Maven project. http://www.hdfgroup.org/hdf-java-html/ Is it even possible to put native C++ libraries (*.so files) into a Maven repository? Or is there another way that I can put them into a Maven project in

Test group existence in HDF5/C++

≯℡__Kan透↙, submitted on 2019-12-22 02:03:20
Question: I am opening an existing HDF5 file to append data; I want to ensure that a group called /A exists for subsequent access. I am looking for an easy way to create /A conditionally (create and return a new group if it does not exist, or return the existing one). One way is to test for the existence of /A. How can I do that efficiently? According to the API docs, I can do something like this: H5::H5File h5file(filename, H5F_ACC_RDWR); H5::Group grp; try { grp = h5file.openGroup("A"); } catch (H5:

How do I read/write to a subgroup within an HDFStore?

断了今生、忘了曾经, submitted on 2019-12-22 00:24:31
Question: I am using an HDFStore to hold some of my processed results prior to analysis. Into the store I want to put three types of results: raw results, which have not been processed at all, just read in and merged from their original CSV formats; processed results, which are derived from the raw results, with some processing and division into more logical groupings; and summarised results, which have useful summary columns added and redundant columns removed, for easy reading. I thought an HDFStore with
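HDFStore keys can be hierarchical paths, which gives exactly the raw/processed/summarised layout described above. A minimal sketch (key and file names are illustrative; PyTables must be installed for HDFStore to work):

```python
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3]})
keys = []
try:
    with pd.HDFStore("results.h5", mode="w") as store:
        store.put("raw/run1", df)              # stored under /raw/run1
        store.put("processed/run1", df * 2)    # stored under /processed/run1
        store.put("summary/run1", df.describe())
        keys = store.keys()                    # all keys, as absolute paths
except ImportError:
    print("PyTables not installed; skipping the HDFStore demo")
print(keys)
```

Reading back works with the same paths (`store["raw/run1"]`), so each result type lives in its own subgroup of one file.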

How to share memory from an HDF5 dataset with a NumPy ndarray

ぃ、小莉子, submitted on 2019-12-21 20:01:41
Question: I am writing an application that streams data from a sensor and then processes the data in various ways. The processing components include visualizing the data, some number crunching (linear algebra), and also writing the data to disk in HDF5 format. Ideally, each of these components will be its own module, all running in the same Python process so that IPC is not an issue. This leads me to the question of how to store the streaming data efficiently. The datasets are quite large (~5 GB),
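True zero-copy sharing with an HDF5 dataset is not generally possible (chunked or compressed data has no single in-memory layout), but h5py's `read_direct` fills a preallocated ndarray in place, so one buffer can be reused across the modules without a fresh allocation per read. A minimal sketch (shapes and file name are illustrative):

```python
import numpy as np

try:
    import h5py
    buf = np.empty((4, 3), dtype=np.float64)   # shared, reusable buffer
    with h5py.File("stream.h5", "w") as f:
        dset = f.create_dataset("data", data=np.arange(12.0).reshape(4, 3))
        dset.read_direct(buf)                   # fills buf in place, no temp array
    ok = bool(np.array_equal(buf, np.arange(12.0).reshape(4, 3)))
except ImportError:
    ok = None                                   # h5py not installed; demo skipped
print(ok)
```

Because every consumer sees the same `buf` object, the visualizer and the linear-algebra code operate on the memory the reader just filled.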

HDF5 and ndarray append / time-efficient approach for large datasets

冷暖自知, submitted on 2019-12-21 05:38:33
Question: Background: I have k n-dimensional time series, each represented as an m x (n+1) array holding float values (n columns plus one that holds the date). Example: k (around 4 million) time series that look like 20100101 0.12 0.34 0.45 ... 20100105 0.45 0.43 0.21 ... Each day, I want to add an additional row for a subset (< k) of the data sets. All datasets are stored in groups in one HDF5 file. Question: What is the most time-efficient approach to append the rows to the data
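The HDF5-native way to make per-dataset appends cheap is to create each dataset as resizable (`maxshape=(None, …)`) and chunked, so the daily append is a resize plus one chunk write rather than a rewrite of the array. A minimal sketch for one series (shapes, chunk size, and names are illustrative):

```python
import numpy as np

try:
    import h5py
    with h5py.File("series.h5", "w") as f:
        # Unlimited first axis; chunking is what makes appends efficient.
        d = f.create_dataset("ts/series0", shape=(0, 4), maxshape=(None, 4),
                             chunks=(1024, 4), dtype="f8")
        new_row = np.array([20100106, 0.1, 0.2, 0.3])
        d.resize(d.shape[0] + 1, axis=0)   # grow by one row
        d[-1, :] = new_row                 # write the appended row
        nrows = d.shape[0]
except ImportError:
    nrows = None                           # h5py not installed; demo skipped
print(nrows)
```

With millions of small datasets, the per-dataset open/resize overhead dominates, so batching the daily updates per group (or consolidating series into fewer, wider datasets) is usually worth measuring as well.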

HDF5 struct with pointer array

我与影子孤独终老i, submitted on 2019-12-21 02:28:54
Question: I am trying to write an HDF5 file with a struct that contains an int and a float*: typedef struct s1_t { int a; float *b; } s1_t; However, after allocating the float* and putting values into it, I still can't output the data to my HDF5 file. I believe this is because the write function assumes that the compound data type is contiguous, which a dynamically allocated array is not. Is there any way around this problem while still using a pointer array? /* * This example shows how to create a
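The HDF5-native answer to "struct with a float* member" is a variable-length field (`hvl_t` in C), which tells the library to follow the pointer instead of assuming contiguous memory. The same shape of solution in h5py, as a hedged sketch (names illustrative):

```python
import numpy as np

try:
    import h5py
    # Compound dtype mirroring s1_t: an int plus a variable-length float field.
    vlen_f = h5py.vlen_dtype(np.float32)
    s1 = np.dtype([("a", np.int32), ("b", vlen_f)])
    with h5py.File("s1.h5", "w") as f:
        d = f.create_dataset("s1", shape=(1,), dtype=s1)
        # The vlen field accepts an array of any length per element.
        d[0] = (7, np.array([1.0, 2.0, 3.0], dtype=np.float32))
        stored_a = int(f["s1"][0]["a"])
except ImportError:
    stored_a = None                        # h5py not installed; demo skipped
print(stored_a)
```

In the C version the change is analogous: declare member `b` with `H5Tvlen_create(H5T_NATIVE_FLOAT)` and store it as an `hvl_t {len, p}` rather than a bare `float*`.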

How to import a .mat v7.3 file using h5py

梦想与她, submitted on 2019-12-20 04:51:17
Question: I have a .mat file which contains three matrices A, B, and C. I used scipy.io to import this .mat file as below: data = sio.loadmat('/data.mat') A = data['A'] B = data['B'] C = data['C'] But a v7.3 file cannot be imported this way, so I tried h5py, though I don't know how to use it. My code is as below: f = h5py.File('/data.mat', 'r') A = f.get('/A') A = np.array('A') Which part is wrong? Thank you! Answer 1: In Octave >> A = [1,2,3;4,5,6]; >> B = [1,2,3,4]; >> save -hdf5 abc.h5 A B In
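The bug is in the last line: `np.array('A')` wraps the *string* `'A'`, not the dataset. `f['A']` (or `f.get('A')`) returns an h5py Dataset whose values are read with `[()]` or `np.array(dataset)`; MATLAB also stores arrays column-major, so a transpose is usually needed. A sketch of the fix, using a plain HDF5 file standing in for the v7.3 .mat file:

```python
import numpy as np

try:
    import h5py
    # Write a 2x3 MATLAB-style matrix as HDF5 stores it: transposed (3x2).
    with h5py.File("data.h5", "w") as f:
        f["A"] = np.array([[1, 4], [2, 5], [3, 6]], dtype=float)
    # Correct read: take the dataset's values, then undo the transpose.
    with h5py.File("data.h5", "r") as f:
        A = f["A"][()].T
    shape = A.shape
except ImportError:
    shape = None                           # h5py not installed; demo skipped
print(shape)
```

With the real file the same two lines apply: `A = f['A'][()].T` recovers the matrix in MATLAB's row/column orientation.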
