hdf5

Read or Write a compound datatype with h5py in python

徘徊边缘 submitted on 2019-12-20 03:28:11
Question: I want to use an HDF5 file from C++, MATLAB, and Python code. My h5 file works well in C++ and MATLAB, but cannot be read with h5py. Are data types like H5T_STD_B64LE not well supported by h5py? Thanks!

In [2]: f = h5py.File('art.mips.log.h5', 'r')
In [3]: f.keys()
Out[3]: [u'mem']
In [4]: f['mem']
Out[4]: <repr(<h5py._hl.dataset.Dataset at 0x29f70d0>) failed: TypeError: No NumPy equivalent for TypeBitfieldID exists>

The hdf5 file's layout is as follows:

$ h5dump -H art.mips.log.h5
HDF5 "art
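For comparison, here is a minimal sketch of a compound dataset that round-trips cleanly through h5py (the file name and field names here are invented). The bitfield type in the question has no NumPy equivalent, so declaring that field as a plain uint64 on the writing side is one way to sidestep the TypeError:

```python
# Sketch: write and read a compound dataset with h5py, using a plain
# uint64 instead of a bitfield type (H5T_STD_B64LE has no NumPy
# equivalent, which is what triggers the TypeError above).
import os
import tempfile

import h5py
import numpy as np

dt = np.dtype([("time", np.uint64), ("value", np.float64)])
data = np.array([(1, 0.5), (2, 1.5)], dtype=dt)

path = os.path.join(tempfile.mkdtemp(), "demo.h5")  # hypothetical file
with h5py.File(path, "w") as f:
    f.create_dataset("mem", data=data)

with h5py.File(path, "r") as f:
    out = f["mem"][:]   # reads back as a NumPy structured array
print(out["time"])      # -> [1 2]
```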

Getting Started with hdf5 Java library

淺唱寂寞╮ submitted on 2019-12-20 02:11:30
Question: I am learning HDF5 with jhdf5, working on macOS.

brew install hdf5

This installs hdf5-1.10 in /usr/local/Cellar/hdf5. Copy the following file and put it in a Gradle project: https://support.hdfgroup.org/ftp/HDF5/hdf-java/hdf-java-examples/jnative/h5/HDF5FileCreate.java (this is the most basic Java example file). Add this dependency in Gradle:

compile group: 'org.hdfgroup', name: 'hdf-java', version: '2.6.1'

Update the package import statements by adding ncsa in front, then run it. I got this error: java

Installing h5py on OS X

。_饼干妹妹 submitted on 2019-12-19 18:29:15
Question: I've spent the day trying to get the h5py module working in Python, but without success. I've installed the HDF5 shared libraries and followed the instructions I could find on the web, but it doesn't work; below is the error message I get when trying to import the module into Python. I tried installing through MacPorts too, but again it wouldn't work. I'm using Python 2.7 32-bit (I had to for another module, and thus installed the i386 HDF5 library... if that's right?) Any help very

C++ building error for a simple code using armadillo and hdf5 libraries

被刻印的时光 ゝ submitted on 2019-12-19 10:38:08
Question: I'm quite new to C++ and Armadillo, and I'm stuck on the build error described below. I'm trying to test the following simple code, which saves an Armadillo matrix as an HDF5 file:

#include <iostream>
#include <armadillo>

using namespace std;
using namespace arma;

int main() {
    mat A = randu<mat>(240, 320);
    A.save("A.hdf5", hdf5_binary);
    return 0;
}

When compiling, I get the following errors:

/usr/include/armadillo_bits/hdf5_misc.hpp:131: undefined reference in « arma_H5T_NATIVE_DOUBLE »
/usr

Release hdf5 disk memory after table or node removal with pytables or pandas

非 Y 不嫁゛ submitted on 2019-12-19 08:09:09
Question: I'm using HDFStore with pandas/pytables. After removing a table or object, the hdf5 file size remains unchanged. The space seems to be reused when additional objects are added to the store, but it can be an issue if a large amount of space is wasted. I have not found any command in the pandas or pytables APIs that might be used to reclaim hdf5 disk space. Do you know of any mechanism to improve data management in hdf5 files?

Answer 1: See here: you need to ptrepack it, which rewrites the file.

ptrepack -

HDF5 rowmajor or colmajor

时光怂恿深爱的人放手 submitted on 2019-12-19 07:08:09
Question: Is it possible to know whether a matrix stored in HDF5 format is in row-major or column-major order? For example, when I save matrices from Octave, which stores them internally in column-major order, I need to transpose them when I read them in my C code, where matrices are stored in row-major order, and vice versa.

Answer 1: HDF5 stores data in row-major order: "HDF5 uses C storage conventions, assuming that the last listed dimension is the fastest-changing dimension and the first-listed dimension is the slowest changing." from the
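The same convention can be checked from Python with NumPy, which also defaults to row-major (C) order; a small sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)   # C (row-major) order by default
assert a.flags["C_CONTIGUOUS"]

# Row-major: the last axis varies fastest in memory, matching HDF5's
# "last listed dimension is the fastest-changing" convention.
assert list(a.ravel(order="C")) == [0, 1, 2, 3, 4, 5]

# A column-major copy, the layout Octave/MATLAB use internally:
b = np.asfortranarray(a)
assert list(b.ravel(order="F")) == [0, 3, 1, 4, 2, 5]
```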

How to differentiate between HDF5 datasets and groups with h5py?

核能气质少年 submitted on 2019-12-18 12:23:51
Question: I use the Python package h5py (version 2.5.0) to access my hdf5 files. I want to traverse the contents of a file and do something with every dataset. Using the visit method:

import h5py

def print_it(name):
    dset = f[name]
    print(dset)
    print(type(dset))

with h5py.File('test.hdf5', 'r') as f:
    f.visit(print_it)

for a test file I obtain:

<HDF5 group "/x" (1 members)>
<class 'h5py._hl.group.Group'>
<HDF5 dataset "y": shape (100, 100, 100), type "<f8">
<class 'h5py._hl.dataset.Dataset'>

which tells me
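One common approach (going beyond the truncated excerpt above) is visititems, which passes the object itself to the callback, so isinstance can tell groups and datasets apart; a sketch against a throwaway file:

```python
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "test.hdf5")
with h5py.File(path, "w") as f:
    f.create_group("x").create_dataset("y", data=np.zeros((4, 4)))

found = []

def collect(name, obj):
    # visititems (unlike visit) hands us the object, not just its name
    if isinstance(obj, h5py.Dataset):
        found.append((name, obj.shape))

with h5py.File(path, "r") as f:
    f.visititems(collect)

print(found)   # -> [('x/y', (4, 4))]
```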

What are the disadvantages of using .Rdata files compared to HDF5 or netCDF?

隐身守侯 submitted on 2019-12-18 10:50:49
Question: I have been asked to change a piece of software that currently exports .Rdata files so that it exports in a 'platform independent binary format' such as HDF5 or netCDF. Two reasons were given: .Rdata files can only be read by R, and binary information is stored differently depending on the operating system or architecture. I also found that the "R Data Import/Export" manual does not discuss .Rdata files, although it does discuss HDF5 and netCDF. A discussion on R-help suggests that .Rdata files are platform

Saving in a file an array or DataFrame together with other information

£可爱£侵袭症+ submitted on 2019-12-18 10:04:07
Question: The statistical software Stata allows short text snippets to be saved within a dataset, accomplished using notes and/or characteristics. This feature is of great value to me, as it lets me save a variety of information, ranging from reminders and to-do lists to information about how I generated the data, or even what the estimation method for a particular variable was. I am now trying to come up with similar functionality in Python 3.6. So far, I have looked online and
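HDF5 attributes are the closest analogue to Stata's notes/characteristics: small named values attached to a file, group, or dataset. A minimal h5py sketch (the file name, dataset name, and note text here are invented):

```python
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "results.h5")  # hypothetical
with h5py.File(path, "w") as f:
    dset = f.create_dataset("estimates", data=np.arange(10.0))
    # Attributes can hang off a dataset or off the file itself.
    dset.attrs["note"] = "estimated via OLS on the trimmed sample"
    f.attrs["todo"] = "re-run after the next data delivery"

with h5py.File(path, "r") as f:
    note = f["estimates"].attrs["note"]
print(note)
```

pandas' HDFStore sits on top of the same format, so the same attribute mechanism is available there too.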

Writing to compound dataset with variable length string via h5py (HDF5)

社会主义新天地 submitted on 2019-12-18 09:29:12
Question: I've been able to create a compound dataset consisting of an unsigned int and a variable-length string in my HDF5 file using h5py, but I can't write to it.

dt = h5py.special_dtype(vlen=str)
dset = fout.create_dataset(ver, (1,), dtype=np.dtype([("time", np.uint64), ("value", dt)]))

I've written to other compound datasets fairly easily, by setting specific column(s) of the compound dataset equal to an existing numpy array. Now where I run into trouble is with writing to the compound
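One pattern that avoids the per-column write the question struggles with is to assign complete rows (or a structured array with the full compound dtype). A sketch; the field names mirror the question, but the values are invented:

```python
import os
import tempfile

import h5py
import numpy as np

str_dt = h5py.special_dtype(vlen=str)
dt = np.dtype([("time", np.uint64), ("value", str_dt)])

path = os.path.join(tempfile.mkdtemp(), "log.h5")
with h5py.File(path, "w") as f:
    dset = f.create_dataset("log", (2,), dtype=dt)
    # Assign whole rows at once rather than one column at a time.
    dset[0] = (123, "first entry")
    dset[1] = (456, "second entry")

with h5py.File(path, "r") as f:
    row = f["log"][0]
print(row["time"], row["value"])
```

Note that recent h5py versions may return the vlen string field as bytes when reading it back.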