msl.io.group module

A Group can contain sub-Groups and/or Datasets.
- class msl.io.group.Group(name, parent, read_only, **metadata)[source]
  Bases: Vertex

  A Group can contain sub-Groups and/or Datasets. Do not instantiate directly. Create a new Group using create_group().

  - Parameters:
    name (str) – The name of this Group. Uses a naming convention analogous to UNIX file systems, where each Group can be thought of as a directory and every subdirectory is separated from its parent directory by the '/' character.
    read_only (bool) – Whether the Group is to be accessed in read-only mode.
    **metadata – Key-value pairs that are used to create the Metadata for this Group.
- add_dataset(name, dataset)[source]
  Add a Dataset.

  Automatically creates the ancestor Groups if they do not exist.
- add_dataset_logging(name, dataset_logging)[source]
  Add a DatasetLogging.

  Automatically creates the ancestor Groups if they do not exist.

  - Parameters:
    name (str) – The name of the new DatasetLogging to add.
    dataset_logging (DatasetLogging) – The DatasetLogging to add. The DatasetLogging and the Metadata are copied.
- add_group(name, group)[source]
  Add a Group.

  Automatically creates the ancestor Groups if they do not exist.
- create_dataset(name, read_only=None, **kwargs)[source]
  Create a new Dataset.

  Automatically creates the ancestor Groups if they do not exist.
- create_dataset_logging(name, level='INFO', attributes=None, logger=None, date_fmt=None, **kwargs)[source]
  Create a Dataset that handles logging records.

  Automatically creates the ancestor Groups if they do not exist.

  - Parameters:
    level (int or str, optional) – The logging level to use.
    attributes (list or tuple of str, optional) – The attribute names to include in the Dataset for each logging record. If None then uses asctime, levelname, name, and message.
    logger (Logger, optional) – The Logger that the DatasetLogging object will be added to. If None then it is added to the root Logger.
    date_fmt (str, optional) – The datetime format code to use to represent the asctime attribute. If None then uses the ISO 8601 format '%Y-%m-%dT%H:%M:%S.%f'.
    **kwargs – Additional keyword arguments are passed to Dataset. The default behaviour is to append every logging record to the Dataset. This guarantees that the size of the Dataset equals the number of logging records that were added to it. However, appending can decrease performance if many logging records are added often, because a copy of the data in the Dataset is created for each logging record that is added. You can improve performance by specifying an initial size of the Dataset with a shape or a size keyword argument. If the size of the Dataset then needs to be increased, additional empty rows (proportional to the current size of the Dataset) are created automatically. If you do this, call remove_empty_rows() before writing the DatasetLogging to a file or interacting with its data, to remove the extra rows that were created.

  - Returns:
    DatasetLogging – The DatasetLogging that was created.
  Examples

  >>> import logging
  >>> from msl.io import JSONWriter
  >>> logger = logging.getLogger('my_logger')
  >>> root = JSONWriter()
  >>> log_dset = root.create_dataset_logging('log')
  >>> logger.info('hi')
  >>> logger.error('cannot do that!')
  >>> log_dset.data
  array([(..., 'INFO', 'my_logger', 'hi'),
         (..., 'ERROR', 'my_logger', 'cannot do that!')],
        dtype=[('asctime', 'O'), ('levelname', 'O'), ('name', 'O'), ('message', 'O')])

  Get all ERROR logging records

  >>> errors = log_dset[log_dset['levelname'] == 'ERROR']
  >>> print(errors)
  [(..., 'ERROR', 'my_logger', 'cannot do that!')]

  Stop the DatasetLogging object from receiving logging records

  >>> log_dset.remove_handler()
- create_group(name, read_only=None, **metadata)[source]
  Create a new Group.

  Automatically creates the ancestor Groups if they do not exist.

  - Parameters:
    name (str) – The name of the new Group.
    read_only (bool, optional) – Whether to create the Group in read-only mode.
    **metadata – Key-value pairs that are used to create the Metadata for this Group.

  - Returns:
    Group – The Group that was created.
- datasets(exclude=None, include=None, flags=0)[source]
  Get the Datasets in this Group.

  - Parameters:
    exclude (str, optional) – A regex pattern to use to exclude Datasets. The re.search() function is used to compare the exclude regex pattern with the name of each Dataset. If there is a match, the Dataset is not yielded.
    include (str, optional) – A regex pattern to use to include Datasets. The re.search() function is used to compare the include regex pattern with the name of each Dataset. If there is a match, the Dataset is yielded.
    flags (int, optional) – Regex flags that are passed to re.compile().

  - Yields:
    Dataset – The filtered Datasets based on the exclude and include regex patterns. The exclude pattern takes precedence over the include pattern if there is a conflict.
- groups(exclude=None, include=None, flags=0)[source]
  Get the sub-Groups of this Group.

  - Parameters:
    exclude (str, optional) – A regex pattern to use to exclude Groups. The re.search() function is used to compare the exclude regex pattern with the name of each Group. If there is a match, the Group is not yielded.
    include (str, optional) – A regex pattern to use to include Groups. The re.search() function is used to compare the include regex pattern with the name of each Group. If there is a match, the Group is yielded.
    flags (int, optional) – Regex flags that are passed to re.compile().

  - Yields:
    Group – The filtered Groups based on the exclude and include regex patterns. The exclude pattern takes precedence over the include pattern if there is a conflict.
- static is_dataset_logging(obj)[source]
  Test whether an object is a DatasetLogging.

  - Parameters:
    obj (object) – The object to test.

  - Returns:
    bool – Whether obj is an instance of DatasetLogging.
- require_dataset(name, read_only=None, **kwargs)[source]
  Require that a Dataset exists.

  If the Dataset exists then it is returned; if it does not exist then it is created.

  Automatically creates the ancestor Groups if they do not exist.
- require_dataset_logging(name, level='INFO', attributes=None, logger=None, date_fmt=None, **kwargs)[source]
  Require that a Dataset exists for handling logging records.

  If the DatasetLogging exists then it is returned; if it does not exist then it is created.

  Automatically creates the ancestor Groups if they do not exist.

  - Parameters:
    level (int or str, optional) – The logging level to use.
    attributes (list or tuple of str, optional) – The attribute names to include in the Dataset for each logging record. If the Dataset exists and attributes are specified, and they do not match those of the existing Dataset, then a ValueError is raised. If None and the Dataset does not exist then uses asctime, levelname, name, and message.
    logger (Logger, optional) – The Logger that the DatasetLogging object will be added to. If None then it is added to the root Logger.
    date_fmt (str, optional) – The datetime format code to use to represent the asctime attribute. If None then uses the ISO 8601 format '%Y-%m-%dT%H:%M:%S.%f'.
    **kwargs – Additional keyword arguments are passed to Dataset. The default behaviour is to append every logging record to the Dataset. This guarantees that the size of the Dataset equals the number of logging records that were added to it. However, appending can decrease performance if many logging records are added often, because a copy of the data in the Dataset is created for each logging record that is added. You can improve performance by specifying an initial size of the Dataset with a shape or a size keyword argument. If the size of the Dataset then needs to be increased, additional empty rows (proportional to the current size of the Dataset) are created automatically. If you do this, call remove_empty_rows() before writing the DatasetLogging to a file or interacting with its data, to remove the extra rows that were created.

  - Returns:
    DatasetLogging – The DatasetLogging that was created or that already existed.