The Trajectory, Single Runs and Group Nodes

Trajectory

class pypet.trajectory.Trajectory(name='my_trajectory', add_time=True, comment='', dynamically_imported_classes=None, filename=None)[source]

The trajectory manages results and parameters.

The trajectory is the container to interact with before a simulation. During a run you work with SingleRun instances created by the parent trajectory. They do not differ much from the parent trajectory, but they provide less functionality.

You can add four types of data to the trajectory:

  • Config:

    These are special parameters specifying modalities of how to run your simulations. Changing a config parameter should NOT have any influence on the results you obtain from your simulations.

    They specify runtime environment parameters like how many CPUs you use for multiprocessing etc.

    In fact, if you use the default runtime environment of this project, the environment will add some config parameters to your trajectory.

    The method to add more config is f_add_config()

    Config parameters are put into the subtree traj.config (with traj being your trajectory instance).

  • Parameters:

    These are your primary ammunition in numerical simulations. They specify how your simulation works. They can only be added before the actual running of the simulation exploring the parameter space. They can be added via f_add_parameter() and be explored using f_explore(). Or to expand an existing trajectory use f_expand().

    Your parameters should encompass all values that completely define your simulation. I recommend also storing random number generator seeds as parameters to guarantee that a simulation can be repeated exactly the way it was run the first time.

    Parameters are put into the subtree traj.parameters.

  • Derived Parameters:

    They are not much different from parameters except that they can be added anytime.

    Conceptually this encompasses stuff that is intermediately computed from the original parameters. For instance, as your original parameters you have a random number seed and some other parameters. From these you compute a connection matrix for a neural network. This connection matrix could be stored as a derived parameter.

    Derived parameters are added via f_add_derived_parameter().

    Derived parameters are put into the subtree traj.derived_parameters. They are further sorted into traj.derived_parameters.runs.run_XXXXXXXX if they were added during a single run. XXXXXXXX is replaced by the index of the corresponding run, for example run_00000001.

  • Results:

    Results are added via f_add_result(). They are kept under the subtree traj.results and are further sorted into traj.results.runs.run_XXXXXXXX if they are added during a single run.

There are several ways to access the parameters and results, to learn about these, fast access, and natural naming see Accessing Data in the Trajectory.

In case you create a new trajectory you can pass the following arguments:

Parameters:
  • name – Name of the trajectory, if add_time=True the current time is added as a string to the trajectory name.
  • add_time – Boolean whether to add the current time in human readable format to the trajectory name.
  • comment – A useful comment describing the trajectory.
  • dynamically_imported_classes

    If you’ve written a custom parameter that needs to be loaded dynamically during runtime, this needs to be specified here as a list of classes or strings naming classes and their module paths. For example: dynamically_imported_classes = [‘pypet.parameter.PickleParameter’,MyCustomParameter]

    If you only have a single class to import, you do not need the list brackets: dynamically_imported_classes = ‘pypet.parameter.PickleParameter’

  • filename – If you want to use the default HDF5StorageService, you can specify the filename of the HDF5 file. If you specify the filename, the trajectory will automatically create the corresponding service object.
Raises:

ValueError: If the name of the trajectory contains invalid characters.

TypeError: If the dynamically imported classes are not classes or strings.

Example usage:

>>> traj = Trajectory('ExampleTrajectory', dynamically_imported_classes=['Some.custom.class'], comment='I am a neat example!', filename='experiment.hdf5', file_title='Experiments')
f_add_to_dynamic_imports(dynamically_imported_classes)[source]

Adds classes or paths to classes to the trajectory to create custom parameters.

Parameters:dynamically_imported_classes

If you’ve written a custom parameter that needs to be loaded dynamically during runtime, this needs to be specified here as a list of classes or strings naming classes and their module paths. For example: dynamically_imported_classes = [‘pypet.parameter.PickleParameter’,MyCustomParameter]

If you only have a single class to import, you do not need the list brackets: dynamically_imported_classes = ‘pypet.parameter.PickleParameter’

f_as_run(name_or_idx)[source]

Can make the trajectory behave like a single run, for easier data analysis.

Has the following effects:
  • v_idx and v_as_run are set to the appropriate index and run name

  • All explored parameters are set to the corresponding value in the exploration ranges, i.e. when you call f_get() (or fast access) on them you will get in return the value at the corresponding v_idx position in the exploration range.

  • If you perform a search in the trajectory tree, the trajectory will only search the run subtree under results and derived_parameters with the corresponding index. For instance, if you use f_as_run(‘run_00000007’) or f_as_run(7) and search for traj.results.z this will search for z only in the subtree traj.results.run_00000007. Yet, you can still explicitly name other subtrees, i.e. traj.results.run_00000004.z will still work.

    Note that this functionality also affects the iterator functions f_iter_nodes() and f_iter_leaves().

f_backup(backup_filename=None)[source]

Backs up the trajectory with the given storage service.

Parameters:backup_filename

Name of file where to store the backup.

In case you use the standard HDF5 storage service and backup_filename=None, the file will be chosen automatically. The backup file will be in the same folder as your hdf5 file and named ‘backup_XXXXX.hdf5’ where ‘XXXXX’ is the name of your current trajectory.

f_expand(build_dict)[source]

Similar to f_explore(), but can be used to enlarge already completed trajectories.

Raises:

TypeError: If not all explored parameters are enlarged

AttributeError: If keys of dictionary cannot be found in the trajectory

NotUniqueNodeError:

If dictionary keys do not unambiguously map to single parameters

ValueError: If not all explored parameter ranges are of the same length

f_explore(build_dict)[source]

Prepares the trajectory to explore the parameter space.

To explore the parameter space you need to provide a dictionary with the names of the parameters to explore as keys and iterables specifying the exploration ranges as values.

All iterables need to have the same length otherwise a ValueError is raised. A ValueError is also raised if the names from the dictionary map to groups or results and not parameters.

If your trajectory is already explored but not stored yet and your parameters are not locked you can add new explored parameters to the current ones if their iterables match the current length of the trajectory.

Raises an AttributeError if the names from the dictionary are not found at all in the trajectory and a NotUniqueNodeError if the keys do not unambiguously map to single parameters.

Raises a TypeError if the trajectory has been stored already, please use f_expand() then instead.

Example usage:

>>> traj.f_explore({'groupA.param1' : [1,2,3,4,5], 'groupA.param2':['a','b','c','d','e']})

NOTE:

Since parameters are very conservative regarding the data they accept (see Values supported by Parameters), you sometimes won’t be able to use Numpy arrays for exploration as iterables.

For instance, the following code snippet won’t work:

import numpy as np
from pypet.trajectory import Trajectory
traj = Trajectory()
traj.f_add_parameter('my_float_parameter', 42.4,
                     comment='My value is a standard python float')

traj.f_explore( { 'my_float_parameter': np.arange(42.0, 44.876, 0.23) } )

This will result in a TypeError because your exploration iterable np.arange(42.0, 44.876, 0.23) contains numpy.float64 values whereas your parameter is supposed to use standard python floats.

Yet, you can use NumPy's tolist() method to overcome this problem:

traj.f_explore( { 'my_float_parameter': np.arange(42.0, 44.876, 0.23).tolist() } )

Or you could specify your parameter directly as a numpy float:

traj.f_add_parameter('my_float_parameter', np.float64(42.4),
                       comment='My value is a numpy 64 bit float')
f_find_idx(name_list, predicate)[source]

Finds single run indices given a particular condition on parameters.

Parameters:
  • name_list – A list of parameter names the predicate applies to, if you have only a single parameter name you can omit the list brackets.
  • predicate – A lambda predicate for filtering that evaluates to either true or false
Returns:

A generator yielding the matching single run indices

Example:

>>> predicate = lambda param1, param2: param1==4 and param2 in [1.0, 2.0]
>>> iterator = traj.f_find_idx(['groupA.param1', 'groupA.param2'], predicate)
>>> [x for x in iterator]
[0, 2, 17, 36]
f_get_from_runs(name, where='results', use_indices=False, fast_access=False, backwards_search=False, shortcuts=True, max_depth=None, auto_load=False)[source]

Searches for all occurrences of name in each run.

Generates an ordered dictionary with the run names or indices as keys and found items as values.

Example:

>>> traj.f_get_from_runs('deep.universal_answer', use_indices=True, fast_access=True)
OrderedDict([(0, 42), (1, 42), (2, 'fortytwo'), (4, 43)])
Parameters:
  • name – String description of the item(s) to find
  • where – Either ‘results’ (short ‘r’ works, too) or ‘derived_parameters’ (short ‘d’ works, too)
  • use_indices – If True the keys of the resulting dictionary are the run indices (e.g. 0,1,2,3), otherwise the keys are run names (e.g. run_00000000, run_00000001)
  • fast_access – Whether to return parameter or result instances or the values handled by these.
  • backwards_search – If the tree should be searched backwards in case more than one name/location is given. For instance, groupA.groupC.valD can be used for backwards search. The starting group will look for valD first, try to find a way back, and check whether it passes by groupA and groupC.
  • shortcuts – If shortcuts are allowed and the trajectory can hop over nodes in the path.
  • max_depth – Maximum depth (relative to start node) how search should progress in tree. None means no depth limit. Only relevant if shortcuts are allowed.
  • auto_load – If data should be loaded from the storage service if it cannot be found in the current trajectory tree. Auto-loading will load group and leaf nodes currently not in memory and it will load data into empty leaves. Be aware that auto-loading does not work with shortcuts.
Returns:

Ordered dictionary with run names or indices as keys and found items as values. Will only include runs where an item was actually found.

f_get_run_information(name_or_idx=None, copy=True)[source]

Returns a dictionary containing information about a single run.

The information dictionaries have the following key, value pairings:

  • completed: Boolean, whether a run was completed

  • idx: Index of a run

  • timestamp: Timestamp of the run as a float

  • time: Formatted time string

  • finish_timestamp: Timestamp of the finishing of the run

  • runtime: Total runtime of the run in human readable format

  • name: Name of the run

  • parameter_summary:

    A string summary of the explored parameter settings for the particular run

  • short_environment_hexsha: The short version of the environment SHA-1 code

If no name or idx is given then a nested dictionary with run names as keys and info dictionaries as values is returned.

Parameters:
  • name_or_idx – str or int
  • copy – Whether you want the dictionary used by the trajectory or a copy. Note if you want the real thing, please do not modify it, i.e. popping or adding stuff. This could mess up your whole trajectory.
Returns:

A run information dictionary or a nested dictionary of information dictionaries with the run names as keys.

f_get_run_names(sort=True)[source]

Returns a list of run names.

Parameters:sort – Whether to get them sorted, will only require O(N) [and not O(N*log N)] since we use (sort of) bucket sort.
f_idx_to_run(name_or_idx)[source]

Converts an integer idx to the corresponding single run name and vice versa.

Parameters:name_or_idx – Name of a single run or an integer index
Returns:The corresponding idx or name of the single run

Example usage:

>>> traj.f_idx_to_run(4)
'run_00000004'
>>> traj.f_idx_to_run('run_00000000')
0
f_is_completed(name_or_id=None)[source]

Whether or not a given run is completed.

If no run is specified it is checked whether all runs were completed.

Parameters:name_or_id – Name or id of a run to check
Returns:True or False
f_is_empty()[source]

Whether no results nor parameters have been added yet to the trajectory (ignores config).

f_iter_runs()[source]

Makes the trajectory iterate over all runs.

Note that after a full iteration, the trajectory is set back to normal.

Thus, the following code snippet

for run_name in traj.f_iter_runs():

     # Do some stuff here...

is equivalent to

for run_name in traj.f_get_run_names(sort=True):
    traj.f_as_run(run_name)

    # Do some stuff here...

traj.f_as_run(None)
Returns:Iterator over runs. The iterator itself will return the run names but modify the trajectory in each iteration and set it back to normal in the end.
f_load(name=None, index=None, as_new=False, load_parameters=2, load_derived_parameters=1, load_results=1, load_other_data=1, load_all=None, force=False, filename=None, dynamically_imported_classes=None)[source]

Loads a trajectory via the storage service.

If you want to load individual results or parameters manually, you can take a look at f_load_items(). To only load subtrees check out f_load_child().

For f_load you can pass the following arguments:

Parameters:
  • name – Name of the trajectory to be loaded. If no name or index is specified the current name of the trajectory is used.
  • index – If you don’t specify a name you can specify an integer index instead. The corresponding trajectory in the hdf5 file at the index position is loaded (counting starts with 0). Negative indices are also allowed counting in reverse order. For instance, -1 refers to the last trajectory in the file, -2 to the second last, and so on.
  • as_new – Whether you want to rerun the experiments. So the trajectory is loaded only with parameters. The current trajectory name is kept in this case, which should be different from the trajectory name specified in the input parameter name. If you load as_new=True all parameters are unlocked. If you load as_new=False the current trajectory is replaced by the one on disk, i.e. name, timestamp, formatted time etc. are all taken from disk.
  • load_parameters – How parameters and config items are loaded
  • load_derived_parameters – How derived parameters are loaded
  • load_results

    How results are loaded

    You can specify how to load the parameters, derived parameters and results as follows:

    pypet.pypetconstants.LOAD_NOTHING: (0)

    Nothing is loaded.

    pypet.pypetconstants.LOAD_SKELETON: (1)

    The skeleton including annotations is loaded, i.e. the items are empty. Note that if the items already exist in your trajectory an AttributeError is thrown. If this is the case use -1 instead.

    pypet.pypetconstants.LOAD_DATA: (2)

    The whole data is loaded. Note that if the items already exist in your trajectory an AttributeError is thrown. If this is the case use -2 instead.

    pypet.pypetconstants.UPDATE_SKELETON: (-1)

    The skeleton is updated, i.e. only items that are not currently part of your trajectory are loaded empty.

    pypet.pypetconstants.UPDATE_DATA: (-2)

    Like (2), but only items that are currently not in your trajectory are loaded with data.

    Note that in all cases except pypet.pypetconstants.LOAD_NOTHING, annotations will be reloaded if the corresponding instance is created or the annotations of an existing instance were emptied before.

  • load_all – As the above, per default set to None. If not None the setting of load_all will overwrite the settings of load_parameters, load_derived_parameters, load_results, and load_other_data. This is more or less a shortcut if all types should be loaded the same way.
  • force – pypet will refuse to load trajectories that have been created using pypet with a different version number. To force the load of a trajectory from a previous version simply set force = True.
  • filename – If you haven’t specified a filename on creation of the trajectory, you can specify one here. The trajectory will generate an HDF5StorageService automatically.
  • dynamically_imported_classes

    If you’ve written a custom parameter that needs to be loaded dynamically during runtime, this needs to be specified here as a list of classes or strings naming classes and there module paths. For example: dynamically_imported_classes = [‘pypet.parameter.PickleParameter’,MyCustomParameter]

    If you only have a single class to import, you do not need the list brackets: dynamically_imported_classes = ‘pypet.parameter.PickleParameter’

    The classes passed here are added for good and will be kept by the trajectory. Please add your dynamically imported classes only once.

Raises:

AttributeError:

If options 1 and 2 (load skeleton and load data) are applied but the objects already exist in your trajectory. This prevents implicitly overriding data in RAM. Use -1 and -2 instead to load only items that are currently not in your trajectory in RAM. Or remove the items you want to ‘reload’ first.

f_lock_derived_parameters()[source]

Locks all derived parameters

f_lock_parameters()[source]

Locks all parameters

f_merge(other_trajectory, trial_parameter=None, remove_duplicates=False, ignore_trajectory_derived_parameters=False, ignore_trajectory_results=False, backup_filename=None, move_nodes=False, delete_other_trajectory=False, keep_info=True, keep_other_trajectory_info=True, merge_config=True)[source]

Merges another trajectory into the current trajectory.

Both trajectories must live in the same space. This means both need to have the same parameters with similar types of values.

Parameters:
  • other_trajectory – Other trajectory instance to merge into the current one.
  • trial_parameter – If you have a particular parameter that specifies only the trial number, i.e. an integer parameter running from 0 to T1 and 0 to T2, respectively, the parameter is modified such that after merging it will cover the range 0 to T1+T2+1. T1 is the number of individual trials in the current trajectory and T2 the number of trials in the other trajectory.
  • remove_duplicates – Whether you want to remove duplicate parameter points. Requires N1 * N2 (quadratic complexity in single runs). A ValueError is raised if no runs would be merged.
  • ignore_trajectory_derived_parameters – Whether you want to ignore or merge derived parameters kept under .derived_parameters.trajectory
  • ignore_trajectory_results – As above but with results. If you have trajectory results with the same name in both trajectories, the result in the current trajectory is kept and the other one is not merged into the current trajectory.
  • backup_filename

    If specified, backs up both trajectories into the given filename.

    You can also choose backup_filename = True, then the trajectories are backed up into two separate files in your data folder and names are automatically chosen as in f_backup().

  • move_nodes – If you use the HDF5 storage service and both trajectories are stored in the same file, merging is performed fast directly within the file. You can choose whether to copy nodes (move_nodes=False) from the other trajectory to the current one, or to move them (move_nodes=True). In the latter case, the stored data is no longer accessible in the other trajectory.
  • delete_other_trajectory – If you want to delete the other trajectory after merging. Only possible if you have chosen to move_nodes. Why would you want to expensively copy data first and then erase it?
  • keep_info – If True, information about the merge is added to the trajectory config tree under config.merge.
  • merge_config – Whether or not to merge all config parameters under .config.git, .config.environment, and .config.merge of the other trajectory into the current one.
  • keep_other_trajectory_info – Whether to keep information like length, name, etc. of the other trajectory. The setting of keep_other_trajectory_info is irrelevant in case keep_info=False.

If you cannot directly merge trajectories within one HDF5 file, a slow merging process is used. Results are loaded, stored, and emptied again one after the other. Might take some time!

Annotations of parameters and derived parameters under .derived_parameters.trajectory are NOT merged. If you wish to extract the annotations of these parameters you have to do that manually before merging. Note that annotations of results and derived parameters of single runs are copied, so you don’t have to worry about these.

f_migrate(new_name=None, new_filename=None, new_file_tile=None, in_store=False)[source]

Can be called to rename and relocate the trajectory.

Choosing a new filename only works with the original HDF5StorageService. In case the trajectory has no storage service, a new HDF5StorageService is created.

Parameters:
  • new_name – New name of the trajectory, None if you do not want to change the name.
  • new_filename – New file_name of the trajectory, None if you do not want to change the filename.
  • in_store – Set this to True if the trajectory has been stored with the new name at the new file before and you just want to “switch back” to the location. If you migrate to a store used before and you do not set in_store=True, the storage service will throw a RuntimeError in case you store the Trajectory because it will assume that you try to store a new trajectory that accidentally has the very same name as another trajectory. If set to True and trajectory is not found in the file, the trajectory is simply stored to the file.
f_preset_config(config_name, *args, **kwargs)[source]

Similar to f_preset_parameter()

f_preset_parameter(param_name, *args, **kwargs)[source]

Presets parameter value before a parameter is added.

Can be called before parameters are added to the Trajectory in order to change the values that are stored into the parameter on creation.

After creation of a parameter, the instance of the parameter is called with param.f_set(*args,**kwargs) with *args, and **kwargs provided by the user with f_preset_parameter.

Before an experiment is carried out it is checked if all parameters that were marked were also preset.

Parameters:
  • param_name – The full name (!) of the parameter that is to be changed after its creation.
  • args – Arguments that will be used for changing the parameter’s data
  • kwargs – Keyword arguments that will be used for changing the parameter’s data

Example:

>>> traj.f_preset_parameter('groupA.param1', data=44)
>>> traj.f_add_parameter('groupA.param1', data=11)
>>> traj.parameters.groupA.param1
44
f_restore_default()[source]

Restores the default value in all explored parameters and sets the v_idx property back to -1 and v_as_run to None.

f_shrink()[source]

Shrinks the trajectory and removes all exploration ranges from the parameters. Only possible if the trajectory has not been stored to disk before or was loaded as new.

Raises:TypeError if the trajectory was stored before.
f_store(new_name=None, new_filename=None, only_init=False)[source]

Stores the trajectory to disk.

Parameters:
  • new_filename – You can give another filename here if you want to store the trajectory somewhere other than in the file you specified on trajectory creation. This will change the file for good; calling f_store again will keep the new file location.
  • new_name – If you want to store the trajectory under a new name. If name is changed, name remains for good and the trajectory keeps the new name.

  • only_init – If you just want to initialise the store. If True, only meta information about the trajectory is stored and none of the nodes/leaves within the trajectory.

If you use the HDF5 Storage Service only novel data is stored to disk.

If you have results that have been stored to disk before only new data items are added and already present data is NOT overwritten. Overwriting existing data with the HDF5 storage service is currently not supported.

If you want to store individual parameters or results, you might want to take a look at f_store_items(). To store whole subtrees of your trajectory check out f_store_child(). Note both functions require that your trajectory was stored to disk with f_store at least once before.

f_update_skeleton()[source]

Loads the full skeleton from the storage service.

This needs to be done after a successful exploration in order to update the trajectory tree with all results and derived parameters from the individual single runs. This will only add empty results and derived parameters (i.e. the skeleton) and load annotations.

v_as_run[source]

Run name if you want to access the trajectory as a single run.

You can turn the trajectory to behave like a single run object if you set v_as_run to a particular run name. Note that only string values are appropriate here, not indices. Check the v_idx property if you want to provide an index.

Alternatively, instead of directly setting v_as_run you can call f_as_run(). See its documentation for a description of making the trajectory behave like a single run.

Set to None to turn everything back to default.

v_comment[source]

Should be a nice descriptive comment

v_filename[source]

The name and path of the hdf5 file in case you use the HDF5StorageService

v_full_copy[source]

Whether trajectory is copied fully during pickling or only the current parameter space point.

Note if the trajectory is copied as a whole, also the single run objects created by the trajectory can access the full parameter space.

Changing v_full_copy will also change v_full_copy of all explored parameters!

v_idx[source]

Index if you want to access the trajectory as a single run.

You can turn the trajectory to behave like a single run object if you set v_idx to a particular index. Note that only integer values are appropriate here, not names of runs.

Alternatively, instead of directly setting v_idx you can call f_as_run(). See its documentation for a description of making the trajectory behave like a single run.

Set to -1 to turn everything back to default.

v_python[source]

The version of python as a string that was used to create the trajectory

v_storage_service[source]

The service that can store the trajectory to disk or wherever.

Default is None or, if a filename was provided on construction, the HDF5StorageService.

v_version[source]

The version of pypet that was used to create the trajectory

SingleRun

class pypet.trajectory.SingleRun(name, idx, parent_trajectory)[source]

Constitutes one specific parameter combination in a whole trajectory with parameter exploration.

A SingleRun instance is accessed during the actual run phase of a trajectory (see also Trajectory). There exists a SingleRun object for each point in the parameter space.

Parameters can no longer be added, the parameter set is supposed to be complete before the actual running of the experiment. However, derived parameters can still be added.

A SingleRun is never instantiated by the user but by the parent trajectory.

f_delete_item(item, *args, **kwargs)[source]

Deletes a single item, see f_delete_items()

f_delete_items(iterator, *args, **kwargs)[source]

Deletes items from storage on disk.

Per default the item is NOT removed from the trajectory.

Parameters:
  • iterator – A sequence of items you want to remove. Either the instances themselves or strings with the names of the items.
  • remove_empty_groups – If your deletion of the instance leads to empty groups, these will be deleted, too. Default is False.
  • remove_from_trajectory – If items should also be removed from trajectory. Default is False.
  • args – Additional arguments passed to the storage service
  • kwargs

    Additional keyword arguments passed to the storage service

    If you use the standard hdf5 storage service, you can pass the following additional keyword argument:

    delete_only:
     You can partially delete leaf nodes. Specify a list of parts of the result node that should be deleted like delete_only=[‘mystuff’,’otherstuff’]. This will only delete the hdf5 sub parts mystuff and otherstuff from disk. BE CAREFUL, partially erasing data happens at your own risk. Depending on how complex the loading process of your result node is, you might not be able to reconstruct any data after partially deleting some of it.

    Be aware that you need to specify the names of parts as they were stored to HDF5. Depending on how your leaf construction works, this may differ from the names the data might have in your leaf in the trajectory container.

    If the hdf5 nodes you specified in delete_only cannot be found a warning is issued.

    Note that massive deletion will fragment your HDF5 file. Try to avoid changing data on disk whenever you can.

    If you want to erase a full node, simply ignore this argument or set to None.

    remove_from_item:
     If data that you want to delete from storage should also be removed from the items in iterator if they contain it. Default is False.
f_get_config(fast_access=False, copy=True)[source]

Returns a dictionary containing the full config names as keys and the config parameters or the config parameter data items as values.

Parameters:
  • fast_access – Determines whether the parameter objects or their values are returned in the dictionary.
  • copy – Whether the original dictionary or a shallow copy is returned. If you want the real dictionary please do not modify it at all! Not Copying and fast access do not work at the same time! Raises ValueError if fast access is true and copy false.
Returns:

Dictionary containing the config data

Raises:

ValueError

f_get_derived_parameters(fast_access=False, copy=True)[source]

Returns a dictionary containing the full names of the derived parameters as keys and the derived parameters or their data items as values.

Parameters:
  • fast_access – Determines whether the parameter objects or their values are returned in the dictionary.
  • copy – Whether the original dictionary or a shallow copy is returned. If you want the real dictionary please do not modify it at all! Not Copying and fast access do not work at the same time! Raises ValueError if fast access is true and copy false.
Returns:

Dictionary containing the derived parameters.

Raises:

ValueError

f_get_explored_parameters(fast_access=False, copy=True)[source]

Returns a dictionary containing the full names of the explored parameters as keys and the parameters or their data items as values.

Parameters:
  • fast_access – Determines whether the parameter objects or their values are returned in the dictionary.
  • copy – Whether the original dictionary or a shallow copy is returned. If you want the real dictionary please do not modify it at all! Not Copying and fast access do not work at the same time! Raises ValueError if fast access is true and copy false.
Returns:

Dictionary containing the explored parameters.

Raises:

ValueError

f_get_parameters(fast_access=False, copy=True)[source]

Returns a dictionary containing the full parameter names as keys and the parameters or the parameter data items as values.

Parameters:
  • fast_access – Determines whether the parameter objects or their values are returned in the dictionary.
  • copy – Whether the original dictionary or a shallow copy is returned. If you request the original dictionary, please do not modify it at all! Not copying and fast access do not work at the same time; a ValueError is raised if fast_access is True and copy is False.
Returns:

Dictionary containing the parameters.

Raises:

ValueError

f_get_results(fast_access=False, copy=True)[source]

Returns a dictionary containing the full result names as keys and the corresponding result objects or result data items as values.

Parameters:
  • fast_access – Determines whether the result objects or their values are returned in the dictionary. Works only for results if they contain a single item with the name of the result.
  • copy – Whether the original dictionary or a shallow copy is returned. If you request the original dictionary, please do not modify it at all! Not copying and fast access do not work at the same time; a ValueError is raised if fast_access is True and copy is False.
Returns:

Dictionary containing the results.

Raises:

ValueError

f_load_item(item, *args, **kwargs)[source]

Loads a single item, see also f_load_items()

f_load_items(iterator, *args, **kwargs)[source]

Loads parameters and results specified in iterator.

You can directly list the Parameter objects or just their names.

If names are given the f_get() method is applied to find the parameters or results in the trajectory. Accordingly, the parameters and results you want to load must already exist in your trajectory (in RAM); probably they are just empty skeletons waiting desperately to handle data. If they do not exist in RAM yet, but have been stored to disk before, you can call f_update_skeleton() in order to bring your trajectory tree skeleton up to date. In case of a single run you can use the f_load_child() method to recursively load a subtree without any data. Then you can load the data of individual results or parameters one by one.

If you want to load the whole trajectory at once or ALL results and parameters that are still empty, take a look at f_load(). As mentioned before, to load subtrees of your trajectory you might want to check out f_load_child().

To load a list of parameters or results with f_load_items you can pass the following arguments:

Parameters:
  • iterator – A list with parameters or results to be loaded.
  • only_empties – Optional keyword argument (boolean), if True only empty parameters or results are passed to the storage service to get loaded. Non-empty parameters or results found in iterator are simply ignored.
  • args – Additional arguments directly passed to the storage service
  • kwargs

    Additional keyword arguments directly passed to the storage service (except the kwarg only_empties)

    If you use the standard hdf5 storage service, you can pass the following additional keyword arguments:

    param load_only:
     If you load a result, you can partially load it and ignore the rest of the data items. Just specify the name of the data you want to load. You can also provide a list, for example load_only='spikes' or load_only=['spikes', 'membrane_potential'].

    Be aware that you need to specify the names of parts as they were stored to HDF5. Depending on how your leaf construction works, this may differ from the names the data might have in your leaf in the trajectory container.

    A warning is issued if data specified in load_only cannot be found in the instances specified in iterator.

    param load_except:
     Analogous to the above, but everything is loaded except names or parts specified in load_except. You cannot use load_only and load_except at the same time; if you do, a ValueError is thrown.

    A warning is issued if names listed in load_except are not part of the items to load.
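The rules for load_only and load_except can be sketched with a small, hypothetical helper (this is not pypet code, just a toy model of the documented behaviour):

```python
import warnings

def select_parts(stored_parts, load_only=None, load_except=None):
    """Toy model of the documented load_only / load_except rules."""
    if load_only is not None and load_except is not None:
        # Documented restriction: the two options are mutually exclusive.
        raise ValueError('Cannot use load_only and load_except at the same time')
    if load_only is not None:
        wanted = [load_only] if isinstance(load_only, str) else list(load_only)
        missing = [name for name in wanted if name not in stored_parts]
        if missing:
            # Documented behaviour: warn about names not found in storage.
            warnings.warn('Not found in storage: %s' % missing)
        return [name for name in wanted if name in stored_parts]
    if load_except is not None:
        excluded = [load_except] if isinstance(load_except, str) else list(load_except)
        return [name for name in stored_parts if name not in excluded]
    return list(stored_parts)

parts = ['spikes', 'membrane_potential', 'rates']
assert select_parts(parts, load_only='spikes') == ['spikes']
assert select_parts(parts, load_except=['rates']) == ['spikes', 'membrane_potential']
```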

f_remove_item(item, remove_empty_groups=False)[source]

Removes a single item, see also f_remove_items().

f_remove_items(iterator, remove_empty_groups=False)[source]

Removes parameters, results or groups from the trajectory.

This function ONLY removes items from your current trajectory and does not delete data stored to disk. If you want to delete data from disk, take a look at f_delete_items().

Parameters:
  • iterator – A sequence of items you want to remove. Either the instances themselves or strings with the names of the items.
  • remove_empty_groups – If your deletion of the instance leads to empty groups, these will be deleted, too.
f_store()[source]

Stores all data from the single run to disk.

Looks for new data added below a group called run_XXXXXXXX and stores it, where XXXXXXXX is the index of this run.

f_store_item(item, *args, **kwargs)[source]

Stores a single item, see also f_store_items().

f_store_items(iterator, *args, **kwargs)[source]

Stores individual items to disk.

This function is useful if you calculated very large results (or large derived parameters) during runtime and you want to write these to disk immediately and empty them afterwards to free some memory.

Instead of storing individual parameters or results you can also store whole subtrees with f_store_child().

You can pass the following arguments to f_store_items:

Parameters:
  • iterator – An iterable containing the parameters or results to store, either their names or the instances. You can also pass group instances or names here to store the annotations of the groups.
  • non_empties – Optional keyword argument (boolean), if True will only store the subset of provided items that are not empty. Empty parameters or results found in iterator are simply ignored.
  • args – Additional arguments passed to the storage service
  • kwargs

    If you use the standard hdf5 storage service, you can pass the following additional keyword argument:

    param overwrite:
     List names of parts of your item that should be erased and overwritten by the new data in your leaf. You can also set overwrite=True to overwrite all parts.

    For instance:

    >>> traj.f_add_result('mygroup.myresult', partA=42, partB=44, partC=46)
    >>> traj.f_store()
    >>> traj.mygroup.myresult.partA = 333
    >>> traj.mygroup.myresult.partB = 'I am going to change to a string'
    >>> traj.f_store_item('mygroup.myresult', overwrite=['partA', 'partB'])
    

    Will store 'mygroup.myresult' to disk again and overwrite the parts 'partA' and 'partB' with the new values 333 and 'I am going to change to a string'. The data stored as partC is not changed.

    Be aware that you need to specify the names of parts as they were stored to HDF5. Depending on how your leaf construction works, this may differ from the names the data might have in your leaf in the trajectory container.

    Note that massive overwriting will fragment and blow up your HDF5 file. Try to avoid changing data on disk whenever you can.

Raises:

TypeError:

If the (parent) trajectory has never been stored to disk. In this case use f_store() first.

ValueError: If no item could be found to be stored.

Note that if you use the standard hdf5 storage service, apart from overwrite there are no additional arguments or keyword arguments to pass!

f_to_dict(fast_access=False, short_names=False, copy=True)[source]

Returns a dictionary with pairings of (full) names as keys and instances/values.

Parameters:
  • fast_access – If True, parameter values are returned instead of the instances. Works also for results if they contain a single item with the name of the result.
  • short_names – If True, keys are not full names but only the short names. Raises a ValueError if the names are not unique.
  • copy – If fast_access=False and short_names=False you can access the original data dictionary if you set copy=False. If you do that, please do not modify anything! Raises ValueError if copy=False and fast_access=True or short_names=True.
Returns:

dictionary

Raises:

ValueError
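The short_names uniqueness rule can be illustrated with a toy sketch (hypothetical code; here the full name simply stands in for the instance/value):

```python
def to_dict(full_names, short_names=False):
    """Toy model of the short_names uniqueness rule (not pypet code)."""
    if not short_names:
        # Full names are unique by construction; values stand in for instances.
        return {name: name for name in full_names}
    result = {}
    for full_name in full_names:
        short = full_name.split('.')[-1]  # keep only the last path segment
        if short in result:
            # Documented behaviour: ambiguous short names raise a ValueError.
            raise ValueError('Short name %r is not unique' % short)
        result[short] = full_name
    return result

assert to_dict(['parameters.x', 'results.y'], short_names=True) == \
    {'x': 'parameters.x', 'y': 'results.y'}
```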

v_as_run[source]

In case of a single run, this returns simply the name of the run.

v_auto_load[source]

Whether the trajectory should attempt to load data on the fly.

v_backwards_search[source]

Whether to apply backwards search in the tree if one searches a branch with the square brackets notation [].

v_environment_hexsha[source]

If the trajectory is used with an environment this returns the SHA-1 code of the environment.

v_environment_name[source]

If the trajectory is used with an environment this returns the name of the environment.

v_fast_access[source]

Whether parameter instances (False) or their values (True) are returned via natural naming.

Works also for results if they contain a single item with the name of the result.

Default is True.

v_filename[source]

The name and path of the hdf5 file in case you use the HDF5StorageService

v_idx[source]

Index of the single run

v_iter_recursive[source]

Whether using __iter__ should iterate only immediate children or recursively all nodes.

v_max_depth[source]

The maximum depth the tree should be searched if shortcuts are allowed.

Set to None if there should be no depth limit.

v_shortcuts[source]

Whether shortcuts are allowed if accessing data via natural naming or squared bracket indexing.

v_standard_leaf[source]

The standard constructor used if you add a generic leaf.

The constructor is only used if you do not add items under the usual four subtrees (parameters, derived_parameters, config, results).

v_standard_parameter[source]

The standard parameter used for parameter creation

v_standard_result[source]

The standard result class used for result creation

v_storage_service[source]

The service which is used to store a trajectory to disk

v_time[source]

Formatted time string of the time the trajectory or run was created.

v_timestamp[source]

Float timestamp of creation time

v_trajectory_name[source]

Name of the (parent) trajectory

v_trajectory_time[source]

Time (parent) trajectory was created

v_trajectory_timestamp[source]

Float timestamp when (parent) trajectory was created

NNGroupNode

class pypet.naturalnaming.NNGroupNode(nn_interface=None, full_name='', comment='')[source]

A group node hanging somewhere under the trajectory or single run root node.

You can add other groups or parameters/results to it.

f_add_group(name, comment='')[source]

Adds an empty generic group under the current node.

You can add to a generic group anywhere you want. So you are free to build your parameter tree with any structure. You do not necessarily have to follow the four subtrees config, parameters, derived_parameters, results.

If you are operating within these subtrees this simply calls the corresponding adding function.

Be aware that if you are within a single run and you add items not below a group run_XXXXXXXX, you have to store the items manually. Otherwise they will be lost after the single run is completed.

f_add_leaf(*args, **kwargs)[source]

Adds an empty generic leaf under the current node.

You can add generic leaves anywhere you want. So you are free to build your trajectory tree with any structure. You do not necessarily have to follow the four subtrees config, parameters, derived_parameters, results.

If you are operating within these subtrees this simply calls the corresponding adding function.

Be aware that if you are within a single run and you add items not below a group run_XXXXXXXX, you have to store the items manually. Otherwise they will be lost after the single run is completed.

f_ann_to_str()

Returns annotations as string

Equivalent to v_annotations.f_ann_to_str()

f_ann_to_string(*args, **kwargs)

Returns annotations as string

Equivalent to v_annotations.f_ann_to_str()

DEPRECATED: Please use f_ann_to_str() instead.

f_children()[source]

Returns the number of children of the group

f_contains(item, backwards_search=False, shortcuts=False, max_depth=None)[source]

Checks if the node contains a specific parameter or result.

It is checked if the item can be found via the f_get() method.

Parameters:
  • item

    Parameter/Result name or instance.

    If a parameter or result instance is supplied it is also checked if the provided item and the found item are exactly the same instance, i.e. id(item)==id(found_item).

  • backwards_search – If backwards search should be allowed in case the name contains grouping.
  • shortcuts – If shortcuts is False, the name you supply must be found in the tree WITHOUT hopping over nodes in between; a short name (one without grouping) must then be found among the immediate children of your current node. Otherwise searching via shortcuts is allowed.
  • max_depth – If shortcuts is True, the maximum search depth can be specified. None means no limit.
Returns:

True or False

f_get(name, fast_access=False, backwards_search=False, shortcuts=True, max_depth=None, auto_load=False)[source]

Searches and returns an item (parameter/result/group node) with the given name.

Parameters:
  • name – Name of the item (full name or parts of the full name)
  • fast_access – Whether fast access should be applied.
  • backwards_search – If the tree should be searched backwards in case more than one name/location is given. For instance, groupA.groupC.valD can be used for backwards search. The starting group will look for valD first, try to find a way back, and check whether it passes by groupA and groupC.
  • shortcuts – If shortcuts are allowed and the trajectory can hop over nodes in the path.
  • max_depth – Maximum depth (relative to the start node) up to which the search progresses in the tree. None means no depth limit. Only relevant if shortcuts are allowed.
  • auto_load – If data should be loaded from the storage service if it cannot be found in the current trajectory tree. Auto-loading will load group and leaf nodes currently not in memory and it will load data into empty leaves. Be aware that auto-loading does not work with shortcuts.
Returns:

The found instance (result/parameter/group node) or if fast access is True and you found a parameter or result that supports fast access, the contained value is returned.

Raises:

AttributeError: If no node with the given name can be found

NotUniqueNodeError

In case of forward search if more than one candidate node is found within a particular depth of the tree. In case of backwards search if more than one candidate is found regardless of the depth.

Any exception raised by the StorageService in case auto-loading is enabled
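The forward-search behaviour (level-by-level descent, max_depth, and the uniqueness check behind NotUniqueNodeError) can be sketched with a toy tree. This is hypothetical illustration code, not pypet's actual search algorithm:

```python
class Node:
    """Toy tree node (hypothetical, for illustration only)."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = {child.name: child for child in children}

def find(root, name, max_depth=None):
    """Level-by-level forward search that may hop over intermediate nodes."""
    level, depth = [root], 0
    while level and (max_depth is None or depth <= max_depth):
        matches = [node for node in level if node.name == name]
        if len(matches) > 1:
            # Analogue of NotUniqueNodeError: ambiguous within one depth.
            raise RuntimeError('%r is not unique at depth %d' % (name, depth))
        if matches:
            return matches[0]
        level = [c for node in level for c in node.children.values()]
        depth += 1
    raise AttributeError('No node named %r found' % name)

tree = Node('root', [Node('groupA', [Node('x')]), Node('groupB', [Node('y')])])
assert find(tree, 'x').name == 'x'  # hops over groupA
```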

f_get_all(name, max_depth=None)[source]

Searches for all occurrences of name under node.

Parameters:
  • name – Name of what to look for, can be separated by dots, i.e. mygroupA.mygroupB.myparam.
  • max_depth – Maximum search depth relative to the current node. None for no limit.
Returns:

List of nodes that match the name, empty list if nothing was found.

f_get_annotations(*args)

Returns annotations

Equivalent to v_annotations.f_get(*args)

f_get_children(copy=True)[source]

Returns a children dictionary.

Parameters:
  • copy – Whether the group’s original dictionary or a shallow copy is returned. If you want the real dictionary please do not modify it at all!
Returns:

Dictionary of nodes

f_get_class_name()

Returns the class name of the parameter or result or group.

Equivalent to obj.__class__.__name__

f_get_root()[source]

Returns the root node of the tree.

Either a full trajectory or a single run container.

f_has_children()[source]

Checks if node has children or not

f_is_root(*args, **kwargs)

Whether the group is root (True for the trajectory and a single run object)

DEPRECATED: Please use property v_is_root!

f_iter_leaves()[source]

Iterates (recursively) over all leaves hanging below the current group.

f_iter_nodes(recursive=True)[source]

Iterates recursively (default) over nodes hanging below this group.

Parameters:
  • recursive – Whether to iterate the whole subtree or only immediate children.
Returns:

Iterator over nodes

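The difference between recursive and non-recursive iteration can be sketched with a small generator over a toy tree (hypothetical code, not pypet's implementation):

```python
class Group:
    """Toy group node (illustrative stand-in, not pypet code)."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def iter_nodes(group, recursive=True):
    """Yield the children; if recursive, descend depth-first into subgroups."""
    for child in group.children:
        yield child
        if recursive:
            for node in iter_nodes(child, recursive=True):
                yield node

root = Group('root', [Group('a', [Group('b')]), Group('c')])
assert [n.name for n in iter_nodes(root, recursive=False)] == ['a', 'c']
assert [n.name for n in iter_nodes(root)] == ['a', 'b', 'c']
```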
f_load_child(name, recursive=False, load_data=2)[source]

Loads a child or recursively a subtree from disk.

Parameters:
  • name – Name of child to load. If grouped (‘groupA.groupB.childC’) the path along the way to last node in the chain is loaded. Shortcuts are NOT allowed!
  • recursive – Whether recursively all nodes below the last child should be loaded, too.
  • load_data – Flag how to load the data. For how to choose ‘load_data’ see Loading.
Returns:

The loaded child, in case of grouping (‘groupA.groupB.childC’) the last node (here ‘childC’) is returned.

f_remove_child(name, recursive=False)[source]

Removes a child of the group.

Note that groups and leaves are only removed from the current trajectory in RAM. If the trajectory is stored to disk, this data is not affected. Thus, removing children can only be used to free RAM!

If you want to free memory on disk via your storage service, use f_delete_items() of your trajectory.

Parameters:
  • name – Name of child, naming by grouping is NOT allowed (‘groupA.groupB.childC’), child must be direct successor of current node.
  • recursive – Must be True if the child is a group that has children. The whole subtree is removed in this case. Otherwise a TypeError is thrown.
Raises:

TypeError: If recursive is False but there are children below the node.

ValueError: If the child does not exist.

f_set_annotations(*args, **kwargs)

Sets annotations

Equivalent to calling v_annotations.f_set(*args,**kwargs)

f_store_child(name, recursive=False)[source]

Stores a child or recursively a subtree to disk.

Parameters:
  • name – Name of child to store. If grouped (‘groupA.groupB.childC’) the path along the way to last node in the chain is stored. Shortcuts are NOT allowed!
  • recursive – Whether recursively all children’s children should be stored too.
Raises:

ValueError: If the child does not exist.

f_to_dict(fast_access=False, short_names=False)[source]

Returns a dictionary with pairings of (full) names as keys and instances as values.

Parameters:
  • fast_access – If True, parameter or result values are returned instead of the instances.
  • short_names – If True, keys are not full names but only the short names. Raises a ValueError if the names are not unique.
Returns:

dictionary

Raises:

ValueError

v_annotations

Annotation feature of a trajectory node.

Store some short additional information about your nodes here. If you use the standard HDF5 storage service, they will be stored as hdf5 node attributes.

v_branch

The name of the branch/subtree, i.e. the first node below the root.

The empty string in case of root itself.

v_comment

Should be a nice descriptive comment

v_depth

Depth of the node in the trajectory tree.

v_full_name

The full name, relative to the root node.

The full name of a trajectory or single run is the empty string since it is root.

v_is_leaf

Whether the node is a leaf or not (if not, it is a group node)

v_is_root

Whether the group is root (True for the trajectory and a single run object)

v_leaf

Whether the node is a leaf or not (if not, it is a group node)

DEPRECATED: Please use v_is_leaf!

v_location

Location relative to the root node.

The location of a trajectory or single run is the empty string since it is root.

v_name

Name of the node

v_run_branch

The run branch this node is hanging below (a branch named run_XXXXXXXX).

The branch name is either the name of a single run (e.g. ‘run_00000009’) or ‘trajectory’.

v_stored

Whether or not this tree node has been stored to disk before.

ParameterGroup

class pypet.naturalnaming.ParameterGroup(nn_interface=None, full_name='', comment='')[source]

Group node in your trajectory, hanging below traj.parameters.

You can add other groups or parameters to it.

f_add_parameter(*args, **kwargs)[source]

Adds a parameter under the current node.

There are two ways to add a new parameter either by adding a parameter instance:

>>> new_parameter = Parameter('group1.group2.myparam', data=42, comment='Example!')
>>> traj.f_add_parameter(new_parameter)

Or by passing the values directly to the function, with the name being the first (non-keyword!) argument:

>>> traj.f_add_parameter('group1.group2.myparam', data=42, comment='Example!')

If you want to create a different parameter than the standard parameter, you can give the constructor as the first (non-keyword!) argument followed by the name (non-keyword!):

>>> traj.f_add_parameter(PickleParameter,'group1.group2.myparam', data=42, comment='Example!')

The full name of the current node is added as a prefix to the given parameter name. If the current node is the trajectory the prefix ‘parameters’ is added to the name.

f_add_parameter_group(name, comment='')[source]

Adds an empty parameter group under the current node.

Adds the full name of the current node as prefix to the name of the group. If current node is the trajectory (root), the prefix ‘parameters’ is added to the full name.

The name can also contain subgroups separated via dots, for example: name=subgroup1.subgroup2.subgroup3. These other parent groups will be created automatically.
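The automatic creation of intermediate groups from a dotted name can be sketched with a small, hypothetical helper, where plain dictionaries stand in for group nodes (this is not pypet code):

```python
def add_group(tree, name):
    """Create every group along a dotted path, reusing existing groups."""
    node = tree
    for part in name.split('.'):
        # setdefault creates the subgroup only if it does not exist yet.
        node = node.setdefault(part, {})
    return node

tree = {}
add_group(tree, 'subgroup1.subgroup2.subgroup3')
assert 'subgroup2' in tree['subgroup1']
```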

ConfigGroup

class pypet.naturalnaming.ConfigGroup(nn_interface=None, full_name='', comment='')[source]

Group node in your trajectory, hanging below traj.config.

You can add other groups or parameters to it.

f_add_config(*args, **kwargs)[source]

Adds a config parameter under the current group.

Similar to f_add_parameter().

If current group is the trajectory the prefix ‘config’ is added to the name.

f_add_config_group(name, comment='')[source]

Adds an empty config group under the current node.

Adds the full name of the current node as prefix to the name of the group. If current node is the trajectory (root), the prefix ‘config’ is added to the full name.

The name can also contain subgroups separated via dots, for example: name=subgroup1.subgroup2.subgroup3. These other parent groups will be created automatically.

DerivedParameterGroup

class pypet.naturalnaming.DerivedParameterGroup(nn_interface=None, full_name='', comment='')[source]

Group node in your trajectory, hanging below traj.derived_parameters.

You can add other groups or parameters to it.

f_add_derived_parameter(*args, **kwargs)[source]

Adds a derived parameter under the current group.

Similar to f_add_parameter()

Naming prefixes are added as in f_add_derived_parameter_group()

f_add_derived_parameter_group(name, comment='')[source]

Adds an empty derived parameter group under the current node.

Adds the full name of the current node as prefix to the name of the group. If the current node is a single run (root), adds the prefix ‘derived_parameters.runs.run_XXXXXXXX’ to the full name, where XXXXXXXX is the zero-padded index of the current run.

The name can also contain subgroups separated via dots, for example: name=subgroup1.subgroup2.subgroup3. These other parent groups will be created automatically.

ResultGroup

class pypet.naturalnaming.ResultGroup(nn_interface=None, full_name='', comment='')[source]

Group node in your trajectory, hanging below traj.results.

You can add other groups or results to it.

f_add_result(*args, **kwargs)[source]

Adds a result under the current node.

There are two ways to add a new result either by adding a result instance:

>>> new_result = Result('group1.group2.myresult', 1666, x=3, y=4, comment='Example!')
>>> traj.f_add_result(new_result)

Or by passing the values directly to the function, with the name being the first (non-keyword!) argument:

>>> traj.f_add_result('group1.group2.myresult', 1666, x=3, y=3, comment='Example!')

If you want to create a different result than the standard result, you can give the constructor as the first (non-keyword!) argument followed by the name (non-keyword!):

>>> traj.f_add_result(PickleResult,'group1.group2.myresult', 1666, x=3, y=3, comment='Example!')

Additional arguments (here 1666) or keyword arguments (here x=3, y=3) are passed onto the constructor of the result.

Adds the full name of the current node as prefix to the name of the result. If the current node is a single run (root), adds the prefix ‘results.runs.run_XXXXXXXX’ to the full name, where XXXXXXXX is the zero-padded index of the current run.

f_add_result_group(name, comment='')[source]

Adds an empty result group under the current node.

Adds the full name of the current node as prefix to the name of the group. If the current node is a single run (root), adds the prefix ‘results.runs.run_XXXXXXXX’ to the full name, where XXXXXXXX is the zero-padded index of the current run.

The name can also contain subgroups separated via dots, for example: name=subgroup1.subgroup2.subgroup3. These other parent groups will be created automatically.
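The run prefixes mentioned above follow the zero-padded run naming the documentation shows elsewhere (e.g. ‘run_00000009’ for run index 9). A hedged sketch of how such a prefix could be built, assuming %08d zero-padding:

```python
def run_prefix(idx, subtree='results'):
    """Build a prefix like 'results.runs.run_00000009' for run index idx.

    Hypothetical helper for illustration; zero-padding to eight digits is
    assumed from the run names shown in the documentation.
    """
    return '%s.runs.run_%08d' % (subtree, idx)

assert run_prefix(9) == 'results.runs.run_00000009'
assert run_prefix(8, 'derived_parameters') == 'derived_parameters.runs.run_00000008'
```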