hdmf.backends.hdf5.h5tools module¶
- class hdmf.backends.hdf5.h5tools.HDF5IO(path=None, mode='r', manager=None, comm=None, file=None, driver=None)¶
Bases: HDMFIO

Open an HDF5 file for IO.

- Parameters
  path (str) – the path to the HDF5 file
  mode (str) – the mode to open the HDF5 file with, one of (“w”, “r”, “r+”, “a”, “w-”, “x”). See h5py.File for more details.
  manager (TypeMap or BuildManager) – the BuildManager or a TypeMap to construct a BuildManager to use for I/O
  comm (Intracomm) – the MPI communicator to use for parallel I/O
  file (File or S3File) – a pre-existing h5py.File object
  driver (str) – driver for h5py to use when opening HDF5 file
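As a minimal sketch, a file can be opened for reading with the common BuildManager (the path 'data.h5' is hypothetical):

from hdmf.common import get_manager
from hdmf.backends.hdf5 import HDF5IO

# Open read-only; the manager maps cached specs to container classes
io = HDF5IO('data.h5', mode='r', manager=get_manager())  # 'data.h5' is hypothetical
try:
    container = io.read()
finally:
    io.close()

HDF5IO can also be used as a context manager, which closes the file automatically (see read below).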
- property comm¶
The MPI communicator to use for parallel I/O.
- property driver¶
- classmethod load_namespaces(namespace_catalog, path=None, namespaces=None, file=None, driver=None)¶
Load cached namespaces from a file.
If file is not supplied, then an h5py.File object will be opened for the given path, the namespaces will be read, and the File object will be closed. If file is supplied, then the given File object will be read from and not closed.

- raises ValueError
if both path and file are supplied but path is not the same as the path of file.
- Parameters
  namespace_catalog (NamespaceCatalog or TypeMap) – the NamespaceCatalog or TypeMap to load namespaces into
  path (str) – the path to the HDF5 file
  namespaces (list) – the namespaces to load
  file (File) – a pre-existing h5py.File object
  driver (str) – driver for h5py to use when opening HDF5 file
- Returns
dict mapping the names of the loaded namespaces to a dict mapping included namespace names and the included data types
- Return type
dict
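A minimal sketch of loading cached namespaces into a fresh catalog (the path 'data.h5' is hypothetical):

from hdmf.spec import NamespaceCatalog
from hdmf.backends.hdf5 import HDF5IO

catalog = NamespaceCatalog()
# 'data.h5' is a hypothetical path; the return value is the dict described above
loaded = HDF5IO.load_namespaces(catalog, path='data.h5')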
- classmethod get_namespaces(path=None, file=None, driver=None)¶
Get the names and versions of the cached namespaces from a file.
If file is not supplied, then an h5py.File object will be opened for the given path, the namespaces will be read, and the File object will be closed. If file is supplied, then the given File object will be read from and not closed.

If there are multiple versions of a namespace cached in the file, then only the latest one (using alphanumeric ordering) is returned. This is the version of the namespace that is loaded by HDF5IO.load_namespaces(…).
- raises ValueError
if both path and file are supplied but path is not the same as the path of file.
- Parameters
  path (str) – the path to the HDF5 file
  file (File) – a pre-existing h5py.File object
  driver (str) – driver for h5py to use when opening HDF5 file
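A minimal sketch (the path 'data.h5' is hypothetical):

from hdmf.backends.hdf5 import HDF5IO

# Maps each cached namespace name to its latest cached version
versions = HDF5IO.get_namespaces(path='data.h5')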
- classmethod copy_file(source_filename, dest_filename, expand_external=True, expand_refs=False, expand_soft=False)¶
Convenience function to copy an HDF5 file while allowing external links to be resolved.
Warning
As of HDMF 2.0, this method is no longer supported and may be removed in a future version. Please use the export method or h5py.File.copy method instead.
Note
The source file will be opened in ‘r’ mode and the destination file will be opened in ‘w’ mode using h5py. To avoid possible collisions, care should be taken that, e.g., the source file is not already open when calling this function.
- Parameters
  source_filename (str) – the path to the HDF5 file to copy
  dest_filename (str) – the name of the destination file
  expand_external (bool) – expand external links into new objects
  expand_refs (bool) – copy objects which are pointed to by reference
  expand_soft (bool) – expand soft links into new objects
- write(container, cache_spec=True, link_data=True, exhaust_dci=True)¶
Write the container to an HDF5 file.
- Parameters
  container (Container) – the Container object to write
  cache_spec (bool) – If True (default), cache specification to file (highly recommended). If False, do not cache specification to file. The appropriate specification will then need to be loaded prior to reading the file.
  link_data (bool) – If True (default), create external links to HDF5 Datasets. If False, copy HDF5 Datasets.
  exhaust_dci (bool) – If True (default), exhaust DataChunkIterators one at a time. If False, exhaust them concurrently.
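A minimal sketch of writing a container, here an hdmf.common.DynamicTable, to a new file (the path 'out.h5' and the table name are hypothetical):

from hdmf.common import DynamicTable, get_manager
from hdmf.backends.hdf5 import HDF5IO

table = DynamicTable(name='example_table', description='an example table')
with HDF5IO('out.h5', mode='w', manager=get_manager()) as io:  # 'out.h5' is hypothetical
    io.write(table)  # cache_spec=True by default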
- export(src_io, container=None, write_args=None, cache_spec=True)¶
Export data read from a file from any backend to HDF5.
See hdmf.backends.io.HDMFIO.export for more details.

- Parameters
  src_io (HDMFIO) – the HDMFIO object for reading the data to export
  container (Container) – the Container object to export. If None, then the entire contents of the HDMFIO object will be exported
  write_args (dict) – arguments to pass to write_builder
  cache_spec (bool) – whether to cache the specification to file
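A minimal sketch of exporting the full contents of one HDF5 file to another (both paths are hypothetical); passing link_data=False through write_args copies datasets into the destination instead of linking back to the source file:

from hdmf.common import get_manager
from hdmf.backends.hdf5 import HDF5IO

# 'src.h5' and 'dest.h5' are hypothetical paths
with HDF5IO('src.h5', mode='r', manager=get_manager()) as src_io:
    with HDF5IO('dest.h5', mode='w', manager=get_manager()) as export_io:
        # link_data=False copies datasets instead of linking to src.h5
        export_io.export(src_io=src_io, write_args={'link_data': False})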
- classmethod export_io(path, src_io, comm=None, container=None, write_args=None, cache_spec=True)¶
Export from one backend to HDF5 (class method).
- Parameters
  path (str) – the path to the destination HDF5 file
  src_io (HDMFIO) – the HDMFIO object for reading the data to export
  comm (Intracomm) – the MPI communicator to use for parallel I/O
  container (Container) – the Container object to export. If None, then the entire contents of the HDMFIO object will be exported
  write_args (dict) – arguments to pass to write_builder
  cache_spec (bool) – whether to cache the specification to file
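This class method constructs the destination HDF5IO internally; a minimal sketch (both paths are hypothetical):

from hdmf.common import get_manager
from hdmf.backends.hdf5 import HDF5IO

# 'src.h5' and 'dest.h5' are hypothetical paths
with HDF5IO('src.h5', mode='r', manager=get_manager()) as src_io:
    HDF5IO.export_io(path='dest.h5', src_io=src_io)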
- read(load_sidecar=True)¶
Read a container from the IO source.
- Parameters
load_sidecar (bool) – Whether to load the sidecar JSON file and use the modifications specified in that file, if the file exists. The file must have the same base name as the original source file but have the suffix .json.
- Returns
the Container object that was read in
- Return type
Container
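A minimal sketch using HDF5IO as a context manager (the path 'data.h5' is hypothetical):

from hdmf.common import get_manager
from hdmf.backends.hdf5 import HDF5IO

with HDF5IO('data.h5', mode='r', manager=get_manager()) as io:  # 'data.h5' is hypothetical
    container = io.read()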
- read_builder()¶
Read data and return the GroupBuilder representing it.
NOTE: On read, the Builder.source will usually not be set on the Builders.
NOTE: The Builder.location is used internally to ensure correct handling of links (in particular on export) and should be set on read for all GroupBuilder, DatasetBuilder, and LinkBuilder objects.
- Returns
a GroupBuilder representing the data object
- Return type
GroupBuilder
- get_written(builder)¶
Return True if this builder has been written to (or read from) disk by this IO object, False otherwise.
- Parameters
builder (Builder) – Builder object to get the written flag for
- Returns
True if the builder is found in self._written_builders using the builder ID, False otherwise
- get_builder(h5obj)¶
Get the builder for the corresponding h5py Group or Dataset
- raises ValueError
When no builder has been constructed yet for the given h5py object
- Parameters
  h5obj (Dataset or Group) – the HDF5 object to get the corresponding Builder object for
- get_container(h5obj)¶
Get the container for the corresponding h5py Group or Dataset
- raises ValueError
When no builder has been constructed yet for the given h5py object
- Parameters
  h5obj (Dataset or Group) – the HDF5 object to get the corresponding Container/Data object for
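A sketch relating the two lookups; after a read, dataset contents on containers are lazily loaded h5py.Dataset objects, which can be mapped back to their HDMF objects. The path 'data.h5' and the key 'my_data' are hypothetical:

from hdmf.common import get_manager
from hdmf.backends.hdf5 import HDF5IO

with HDF5IO('data.h5', mode='r', manager=get_manager()) as io:
    container = io.read()
    dset = container['my_data'].data    # an h5py.Dataset backing the container (hypothetical key)
    builder = io.get_builder(dset)      # the DatasetBuilder created for dset on read
    parent = io.get_container(dset)     # the Container/Data object constructed from it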
- open()¶
Open this HDMFIO object for writing of the builder
- close(close_links=True)¶
Close this file and any files linked to from this file.
- Parameters
close_links (bool) – Whether to close all files linked to from this file. (default: True)
- close_linked_files()¶
Close all opened, linked-to files.
MacOS and Linux automatically release the linked-to file after the linking file is closed, but Windows does not, which prevents the linked-to file from being deleted or truncated. Use this method to close all opened, linked-to files.
- write_builder(builder, link_data=True, exhaust_dci=True, export_source=None)¶
- Parameters
  builder (GroupBuilder) – the GroupBuilder object representing the HDF5 file
  link_data (bool) – If not specified otherwise link (True) or copy (False) HDF5 Datasets
  exhaust_dci (bool) – exhaust DataChunkIterators one at a time. If False, exhaust them concurrently
  export_source (str) – The source of the builders when exporting
- classmethod get_type(data)¶
- set_attributes(obj, attributes)¶
- Parameters
  obj (Group or Dataset) – the HDF5 object to add attributes to
  attributes (dict) – a dict containing the attributes on the Group or Dataset, indexed by attribute name
- write_group(parent, builder, link_data=True, exhaust_dci=True, export_source=None)¶
- Parameters
  parent (Group) – the parent HDF5 object
  builder (GroupBuilder) – the GroupBuilder to write
  link_data (bool) – If not specified otherwise link (True) or copy (False) HDF5 Datasets
  exhaust_dci (bool) – exhaust DataChunkIterators one at a time. If False, exhaust them concurrently
  export_source (str) – The source of the builders when exporting
- Returns
the Group that was created
- Return type
Group
- write_link(parent, builder)¶
- Parameters
  parent (Group) – the parent HDF5 object
  builder (LinkBuilder) – the LinkBuilder to write
- Returns
the Link that was created
- Return type
Link
- write_dataset(parent, builder, link_data=True, exhaust_dci=True, export_source=None)¶
Write a dataset to HDF5
The function uses other dataset-dependent write functions, e.g., __scalar_fill__, __list_fill__, and __setup_chunked_dset__, to write the data.

- Parameters
  parent (Group) – the parent HDF5 object
  builder (DatasetBuilder) – the DatasetBuilder to write
  link_data (bool) – If not specified otherwise link (True) or copy (False) HDF5 Datasets
  exhaust_dci (bool) – exhaust DataChunkIterators one at a time. If False, exhaust them concurrently
  export_source (str) – The source of the builders when exporting
- Returns
the Dataset that was created
- Return type
Dataset
- property mode¶
Return the HDF5 file mode. One of (“w”, “r”, “r+”, “a”, “w-”, “x”).
- classmethod set_dataio(data=None, maxshape=None, chunks=None, compression=None, compression_opts=None, fillvalue=None, shuffle=None, fletcher32=None, link_data=False, allow_plugin_filters=False, shape=None, dtype=None)¶
Wrap the given Data object with an H5DataIO.
This method is provided merely for convenience. It is the equivalent of the following:
from hdmf.backends.hdf5 import H5DataIO

data = ...
data = H5DataIO(data)
- Parameters
data (ndarray or list or tuple or Dataset or Iterable) – the data to be written. NOTE: If an h5py.Dataset is used, all other settings but link_data will be ignored as the dataset will either be linked to or copied as is in H5DataIO.
  maxshape (tuple) – Dataset will be resizable up to this shape (Tuple). Automatically enables chunking. Use None for the axes you want to be unlimited.
  chunks (bool or tuple) – Chunk shape or True to enable auto-chunking
  compression (str or bool or int) – Compression strategy. If a bool is given, then gzip compression will be used by default. http://docs.h5py.org/en/latest/high/dataset.html#dataset-compression
  compression_opts (int or tuple) – Parameter for compression filter
  fillvalue (None) – Value to be returned when reading uninitialized parts of the dataset
  shuffle (bool) – Enable shuffle I/O filter. http://docs.h5py.org/en/latest/high/dataset.html#dataset-shuffle
  fletcher32 (bool) – Enable fletcher32 checksum. http://docs.h5py.org/en/latest/high/dataset.html#dataset-fletcher32
  link_data (bool) – If data is an h5py.Dataset, should it be linked to or copied. NOTE: This parameter is only allowed if data is an h5py.Dataset
  allow_plugin_filters (bool) – Enable passing dynamically loaded filters as compression parameter
  shape (tuple) – the shape of the new dataset, used only if data is None
  dtype (str or type or dtype) – the data type of the new dataset, used only if data is None
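A minimal sketch of wrapping an array with chunking and gzip compression before handing it to a container for write; this is equivalent to calling set_dataio with the same keywords:

import numpy as np
from hdmf.backends.hdf5 import H5DataIO

# Chunked, gzip-compressed wrapping; pass `wrapped` wherever the raw data would go
wrapped = H5DataIO(
    data=np.arange(10000),
    chunks=True,            # let h5py choose a chunk shape
    compression='gzip',
    compression_opts=4,     # gzip compression level
)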