msl.io.group module

A Group can contain sub-Groups and/or Datasets.

class msl.io.group.Group(name, parent, read_only, **metadata)[source]

Bases: Vertex

A Group can contain sub-Groups and/or Datasets.

Do not instantiate directly. Create a new Group using create_group().

Parameters:
  • name (str) – The name of this Group. Uses a naming convention analogous to UNIX file systems where each Group can be thought of as a directory and where every subdirectory is separated from its parent directory by the '/' character.

  • parent (Group) – The parent Group to this Group.

  • read_only (bool) – Whether the Group is to be accessed in read-only mode.

  • **metadata – Key-value pairs that are used to create the Metadata for this Group.

add_dataset(name, dataset)[source]

Add a Dataset.

Automatically creates the ancestor Groups if they do not exist.

Parameters:
  • name (str) – The name of the Dataset.

  • dataset (Dataset) – The Dataset to add.

add_dataset_logging(name, dataset_logging)[source]

Add a DatasetLogging.

Automatically creates the ancestor Groups if they do not exist.

Parameters:
  • name (str) – The name of the DatasetLogging.

  • dataset_logging (DatasetLogging) – The DatasetLogging to add.

add_group(name, group)[source]

Add a Group.

Automatically creates the ancestor Groups if they do not exist.

Parameters:
  • name (str) – The name of the Group.

  • group (Group) – The Group to add.

ancestors()[source]

Get all ancestor (parent) Groups of this Group.

Yields:

Group – The ancestors of this Group.

create_dataset(name, read_only=None, **kwargs)[source]

Create a new Dataset.

Automatically creates the ancestor Groups if they do not exist.

Parameters:
  • name (str) – The name of the new Dataset.

  • read_only (bool, optional) – Whether to create this Dataset in read-only mode. If None then uses the mode for this Group.

  • **kwargs – Key-value pairs that are passed to Dataset.

Returns:

Dataset – The new Dataset that was created.

create_dataset_logging(name, level='INFO', attributes=None, logger=None, date_fmt=None, **kwargs)[source]

Create a Dataset that handles logging records.

Automatically creates the ancestor Groups if they do not exist.

Parameters:
  • name (str) – The name of the new DatasetLogging.

  • level (int or str, optional) – The logging level to use.

  • attributes (list of str, optional) – The attribute names of a logging.LogRecord to append to the Dataset for each logging record.

  • logger (logging.Logger, optional) – The Logger that the logging records are collected from.

  • date_fmt (str, optional) – The datetime format to use for the asctime attribute of a logging record.

  • **kwargs – Key-value pairs that are passed to Dataset.

Returns:

DatasetLogging – The DatasetLogging that was created.

Examples

>>> import logging
>>> from msl.io import JSONWriter
>>> logger = logging.getLogger('my_logger')
>>> root = JSONWriter()
>>> log_dset = root.create_dataset_logging('log')
>>> logger.info('hi')
>>> logger.error('cannot do that!')
>>> log_dset.data
array([(..., 'INFO', 'my_logger', 'hi'), (..., 'ERROR', 'my_logger', 'cannot do that!')],
      dtype=[('asctime', 'O'), ('levelname', 'O'), ('name', 'O'), ('message', 'O')])

Get all ERROR logging records

>>> errors = log_dset[log_dset['levelname'] == 'ERROR']
>>> print(errors)
[(..., 'ERROR', 'my_logger', 'cannot do that!')]

Stop the DatasetLogging object from receiving logging records

>>> log_dset.remove_handler()
create_group(name, read_only=None, **metadata)[source]

Create a new Group.

Automatically creates the ancestor Groups if they do not exist.

Parameters:
  • name (str) – The name of the new Group.

  • read_only (bool, optional) – Whether to create this Group in read-only mode. If None then uses the mode for this Group.

  • **metadata – Key-value pairs that are used to create the Metadata for this Group.

Returns:

Group – The new Group that was created.

datasets(exclude=None, include=None, flags=0)[source]

Get the Datasets in this Group.

Parameters:
  • exclude (str, optional) – A regex pattern to use to exclude Datasets. The re.search() function is used to compare the exclude regex pattern with the name of each Dataset. If there is a match, the Dataset is not yielded.

  • include (str, optional) – A regex pattern to use to include Datasets. The re.search() function is used to compare the include regex pattern with the name of each Dataset. If there is a match, the Dataset is yielded.

  • flags (int, optional) – Regex flags that are passed to re.compile().

Yields:

Dataset – The filtered Datasets, based on the exclude and include regex patterns. The exclude pattern takes precedence over the include pattern if both match.

descendants()[source]

Get all descendant (children) Groups of this Group.

Yields:

Group – The descendants of this Group.

groups(exclude=None, include=None, flags=0)[source]

Get the sub-Groups of this Group.

Parameters:
  • exclude (str, optional) – A regex pattern to use to exclude Groups. The re.search() function is used to compare the exclude regex pattern with the name of each Group. If there is a match, the Group is not yielded.

  • include (str, optional) – A regex pattern to use to include Groups. The re.search() function is used to compare the include regex pattern with the name of each Group. If there is a match, the Group is yielded.

  • flags (int, optional) – Regex flags that are passed to re.compile().

Yields:

Group – The filtered Groups, based on the exclude and include regex patterns. The exclude pattern takes precedence over the include pattern if both match.

static is_dataset(obj)[source]

Test whether an object is a Dataset.

Parameters:

obj (object) – The object to test.

Returns:

bool – Whether obj is an instance of Dataset.

static is_dataset_logging(obj)[source]

Test whether an object is a DatasetLogging.

Parameters:

obj (object) – The object to test.

Returns:

bool – Whether obj is an instance of DatasetLogging.

static is_group(obj)[source]

Test whether an object is a Group.

Parameters:

obj (object) – The object to test.

Returns:

bool – Whether obj is an instance of Group.

remove(name)[source]

Remove a Group or a Dataset.

Parameters:

name (str) – The name of the Group or Dataset to remove.

Returns:

Group, Dataset or None – The Group or Dataset that was removed or None if there was no Group or Dataset with the specified name.

require_dataset(name, read_only=None, **kwargs)[source]

Require that a Dataset exists.

If the Dataset exists then it is returned; if it does not exist then it is created.

Automatically creates the ancestor Groups if they do not exist.

Parameters:
  • name (str) – The name of the Dataset.

  • read_only (bool, optional) – Whether to create this Dataset in read-only mode. If None then uses the mode for this Group.

  • **kwargs – Key-value pairs that are passed to Dataset.

Returns:

Dataset – The Dataset that was created or that already existed.

require_dataset_logging(name, level='INFO', attributes=None, logger=None, date_fmt=None, **kwargs)[source]

Require that a Dataset exists for handling logging records.

If the DatasetLogging exists then it is returned; if it does not exist then it is created.

Automatically creates the ancestor Groups if they do not exist.

Parameters:
  • name (str) – The name of the DatasetLogging.

  • level (int or str, optional) – The logging level to use.

  • attributes (list of str, optional) – The attribute names of a logging.LogRecord to append to the Dataset for each logging record.

  • logger (logging.Logger, optional) – The Logger that the logging records are collected from.

  • date_fmt (str, optional) – The datetime format to use for the asctime attribute of a logging record.

  • **kwargs – Key-value pairs that are passed to Dataset.

Returns:

DatasetLogging – The DatasetLogging that was created or that already existed.

require_group(name, read_only=None, **metadata)[source]

Require that a Group exists.

If the Group exists then it is returned; if it does not exist then it is created.

Automatically creates the ancestor Groups if they do not exist.

Parameters:
  • name (str) – The name of the Group.

  • read_only (bool, optional) – Whether to return the Group in read-only mode. If None then uses the mode for this Group.

  • **metadata – Key-value pairs that are used as Metadata for this Group.

Returns:

Group – The Group that was created or that already existed.