cotengra.hyperoptimizers.hyper#
Base hyper optimization functionality.
Module Contents#
Classes#
- ReconfTrialFn
- SlicedReconfTrialFn
- ComputeScore – The final score wrapper, which performs some simple arithmetic on the trial score to make it more suitable for hyper-optimization.
- HyperOptimizer – A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them.
- ReusableHyperOptimizer – Like HyperOptimizer but re-instantiates the optimizer whenever a new contraction is detected, and caches the paths (and sliced indices) found.
- HyperCompressedOptimizer – A compressed contraction path optimizer that samples a series of ordered contraction trees while optimizing the hyper parameters used to generate them.
- ReusableHyperCompressedOptimizer – Like HyperCompressedOptimizer but re-instantiates the optimizer whenever a new contraction is detected, and caches the paths found.
- HyperMultiOptimizer – A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them.
Functions#
- register_hyper_optlib
- register_hyper_function – Register a contraction path finder to be used by the hyper-optimizer.
- list_hyper_functions – Return a list of currently registered hyper contraction finders.
- make_hashable – Make x hashable by recursively turning lists into tuples and dicts into sorted tuples of key-value pairs.
- hash_contraction – Compute a hash for a particular contraction geometry.
Attributes#
- cotengra.hyperoptimizers.hyper._PATH_FNS#
- cotengra.hyperoptimizers.hyper._OPTLIB_FNS#
- cotengra.hyperoptimizers.hyper._HYPER_SEARCH_SPACE#
- cotengra.hyperoptimizers.hyper._HYPER_CONSTANTS#
- cotengra.hyperoptimizers.hyper.register_hyper_optlib(name, init_optimizers, get_setting, report_result)[source]#
- cotengra.hyperoptimizers.hyper.register_hyper_function(name, ssa_func, space, constants=None)[source]#
Register a contraction path finder to be used by the hyper-optimizer.
- Parameters:
name (str) – The name to call the method.
ssa_func (callable) – The raw function that returns a ContractionTree, with signature (inputs, output, size_dict, **kwargs).
space (dict[str, dict]) – The space of hyper-parameters to search.
constants (dict, optional) – Constant keyword arguments to supply to the function on every trial, rather than being searched over.
- cotengra.hyperoptimizers.hyper.list_hyper_functions()[source]#
Return a list of currently registered hyper contraction finders.
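For illustration, here is a minimal sketch of registering a custom finder. The random-pair tree builder and its 'seed' hyper-parameter are hypothetical, and the {'type': 'INT', 'min': ..., 'max': ...} space schema is an assumption modelled on the built-in search spaces:
```python
import random

from cotengra import ContractionTree
from cotengra.hyperoptimizers.hyper import (
    list_hyper_functions,
    register_hyper_function,
)


def random_pair_tree(inputs, output, size_dict, seed=None, **kwargs):
    """Hypothetical finder: contract uniformly random pairs of tensors."""
    rng = random.Random(seed)
    remaining = list(range(len(inputs)))
    next_id = len(inputs)
    ssa_path = []
    while len(remaining) > 1:
        i, j = sorted(rng.sample(remaining, 2))
        remaining.remove(i)
        remaining.remove(j)
        ssa_path.append((i, j))
        remaining.append(next_id)  # id of the new intermediate tensor
        next_id += 1
    return ContractionTree.from_path(
        inputs, output, size_dict, ssa_path=ssa_path
    )


# make the finder available as HyperOptimizer(methods=['random-pair']),
# letting the hyper-optimizer tune 'seed' over the given space
register_hyper_function(
    name='random-pair',
    ssa_func=random_pair_tree,
    space={'seed': {'type': 'INT', 'min': 0, 'max': 2**16}},
)

assert 'random-pair' in list_hyper_functions()
```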
- class cotengra.hyperoptimizers.hyper.ReconfTrialFn(trial_fn, forested=False, parallel=False, **opts)[source]#
- class cotengra.hyperoptimizers.hyper.SlicedReconfTrialFn(trial_fn, forested=False, parallel=False, **opts)[source]#
- class cotengra.hyperoptimizers.hyper.ComputeScore(fn, score_fn, score_compression=0.75, score_smudge=1e-06, on_trial_error='warn', seed=0)[source]#
The final score wrapper, which performs some simple arithmetic on the trial score to make it more suitable for hyper-optimization.
- class cotengra.hyperoptimizers.hyper.HyperOptimizer(methods=None, minimize='flops', max_repeats=128, max_time=None, parallel='auto', slicing_opts=None, slicing_reconf_opts=None, reconf_opts=None, optlib=None, space=None, score_compression=0.75, on_trial_error='warn', max_training_steps=None, progbar=False, **optlib_opts)[source]#
Bases:
cotengra.oe.PathOptimizer
A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them.
- Parameters:
methods (None or sequence[str] or str, optional) – Which method(s) to use from list_hyper_functions().
minimize (str, Objective or callable, optional) – How to score each trial, used to train the optimizer and rank the results. If a custom callable, it should take a trial dict as its argument and return a single float.
max_repeats (int, optional) – The maximum number of trial contraction trees to generate. Default: 128.
max_time (None or float, optional) – The maximum amount of time to run for. Use None for no limit. You can also set an estimated execution 'rate' here like 'rate:1e9' that will terminate the search when the estimated FLOPs of the best contraction found divided by the rate is greater than the time spent searching, allowing quick termination on easy contractions.
parallel ('auto', False, True, int, or distributed.Client) – Whether to parallelize the search.
slicing_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing with the given options, and then update the flops and size of the trial with the sliced versions.
slicing_reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing interleaved with subtree reconfiguration with the given options, and then update the flops and size of the trial with the sliced and reconfigured versions.
reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try subtree reconfiguration with the given options, and then update the flops and size of the trial with the reconfigured versions.
optlib ({'baytune', 'nevergrad', 'chocolate', 'skopt'}, optional) – Which optimizer to sample and train with.
space (dict, optional) – The hyper space to search, see get_hyper_space for the default.
score_compression (float, optional) – Raise scores to this power in order to compress or accentuate the differences. The lower this is, the more the selector will sample from various optimizers rather than quickly specializing.
on_trial_error ({'warn', 'raise', 'ignore'}, optional) – What to do if a trial fails. If 'warn' (default), a warning will be printed and the trial will be given a score of inf. If 'raise' the error will be raised. If 'ignore' the trial will be given a score of inf silently.
max_training_steps (int, optional) – The maximum number of trials to train the optimizer with. Setting this can be helpful when the optimizer itself becomes costly to train (e.g. for Gaussian Processes).
progbar (bool, optional) – Show live progress of the best contraction found so far.
optlib_opts – Supplied to the hyper-optimizer library initialization.
- property minimize#
- property parallel#
- property tree#
- property path#
- compressed = False#
- multicontraction = False#
- search(inputs, output, size_dict)[source]#
Run this optimizer and return the ContractionTree for the best path it finds.
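A minimal usage sketch; the three-tensor matrix-chain geometry below is made up for illustration. Since the class subclasses PathOptimizer, the same opt instance can also be passed as the optimize argument to opt_einsum-style contract functions:
```python
import cotengra as ctg

# a made-up matrix-chain geometry: ab,bc,cd -> ad
inputs = [('a', 'b'), ('b', 'c'), ('c', 'd')]
output = ('a', 'd')
size_dict = {'a': 8, 'b': 8, 'c': 8, 'd': 8}

opt = ctg.HyperOptimizer(
    max_repeats=32,    # sample up to 32 candidate trees
    minimize='flops',  # train and rank on total operation count
    reconf_opts={},    # refine each trial with subtree reconfiguration
)
tree = opt.search(inputs, output, size_dict)
print(tree.contraction_cost(), tree.contraction_width())
```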
- cotengra.hyperoptimizers.hyper.make_hashable(x)[source]#
Make x hashable by recursively turning lists into tuples and dicts into sorted tuples of key-value pairs.
- cotengra.hyperoptimizers.hyper.hash_contraction(inputs, output, size_dict, method='a')[source]#
Compute a hash for a particular contraction geometry.
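To illustrate how these two combine (the make_hashable output shown is what the description above implies, not a verified result):
```python
from cotengra.hyperoptimizers.hyper import hash_contraction, make_hashable

# lists become tuples and dicts become sorted key-value tuples, so
# arbitrarily nested option specs can serve as cache keys
make_hashable([1, {'b': 2, 'a': [3]}])
# -> (1, (('a', (3,)), ('b', 2)))

# identical geometries hash identically, which is what lets
# ReusableHyperOptimizer detect repeated contractions
h1 = hash_contraction([('a', 'b'), ('b', 'c')], ('a', 'c'), {'a': 2, 'b': 2, 'c': 2})
h2 = hash_contraction([('a', 'b'), ('b', 'c')], ('a', 'c'), {'a': 2, 'b': 2, 'c': 2})
assert h1 == h2
```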
- class cotengra.hyperoptimizers.hyper.ReusableHyperOptimizer(*, directory=None, overwrite=False, hash_method='a', cache_only=False, **opt_kwargs)[source]#
Bases:
cotengra.oe.PathOptimizer
Like HyperOptimizer but it will re-instantiate the optimizer whenever a new contraction is detected, and also cache the paths (and sliced indices) found.
- Parameters:
directory (None, True, or str, optional) – If specified use this directory as a persistent cache. If True auto generate a directory in the current working directory based on the options which are most likely to affect the path (see ReusableHyperOptimizer.get_path_relevant_opts).
overwrite (bool, optional) – If True, the optimizer will always run, overwriting old results in the cache. This can be used to update paths without deleting the whole cache.
set_surface_order (bool, optional) – If True, when reloading a path to turn into a ContractionTree, the 'surface order' of the path (used for compressed paths) will be set manually to the order in which the path was stored on disk.
hash_method ({'a', 'b', ...}, optional) – The method used to hash the contraction tree. The default, 'a', is faster hashwise but doesn't recognize when indices are permuted.
cache_only (bool, optional) – If True, the optimizer will only use the cache, and will raise KeyError if a contraction is not found.
opt_kwargs – Supplied to HyperOptimizer.
- property last_opt#
- set_surface_order = False#
- get_path_relevant_opts()[source]#
Get a frozenset of the options that are most likely to affect the path. These are the options that we use when the directory name is not manually specified.
- auto_hash_path_relevant_opts()[source]#
Automatically hash the path relevant options used to create the optimizer.
- hash_query(inputs, output, size_dict)[source]#
Hash the contraction specification, returning this and whether the contraction is already present as a tuple.
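A short sketch of the caching behaviour, reusing the hypothetical geometry from the HyperOptimizer example above; the directory name is arbitrary:
```python
import cotengra as ctg

opt = ctg.ReusableHyperOptimizer(
    directory='ctg_cache',  # persist found paths across sessions
    max_repeats=16,         # forwarded to each fresh HyperOptimizer
)
tree = opt.search(inputs, output, size_dict)  # runs a full search
tree = opt.search(inputs, output, size_dict)  # same geometry: cache hit
```
With cache_only=True no search is ever run, and a contraction missing from the cache raises KeyError.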
- class cotengra.hyperoptimizers.hyper.HyperCompressedOptimizer(chi=None, methods=('greedy-compressed', 'greedy-span', 'kahypar-agglom'), minimize='peak-compressed', **kwargs)[source]#
Bases:
HyperOptimizer
A compressed contraction path optimizer that samples a series of ordered contraction trees while optimizing the hyper parameters used to generate them.
- Parameters:
chi (None or int, optional) – The maximum bond dimension to compress to. If None then use the square of the largest existing dimension. If minimize is specified as a full scoring function, this is ignored.
methods (None or sequence[str] or str, optional) – Which method(s) to use from list_hyper_functions().
minimize (str, Objective or callable, optional) – How to score each trial, used to train the optimizer and rank the results. If a custom callable, it should take a trial dict as its argument and return a single float.
max_repeats (int, optional) – The maximum number of trial contraction trees to generate. Default: 128.
max_time (None or float, optional) – The maximum amount of time to run for. Use None for no limit. You can also set an estimated execution 'rate' here like 'rate:1e9' that will terminate the search when the estimated FLOPs of the best contraction found divided by the rate is greater than the time spent searching, allowing quick termination on easy contractions.
parallel ('auto', False, True, int, or distributed.Client) – Whether to parallelize the search.
slicing_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing with the given options, and then update the flops and size of the trial with the sliced versions.
slicing_reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing interleaved with subtree reconfiguration with the given options, and then update the flops and size of the trial with the sliced and reconfigured versions.
reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try subtree reconfiguration with the given options, and then update the flops and size of the trial with the reconfigured versions.
optlib ({'baytune', 'nevergrad', 'chocolate', 'skopt'}, optional) – Which optimizer to sample and train with.
space (dict, optional) – The hyper space to search, see get_hyper_space for the default.
score_compression (float, optional) – Raise scores to this power in order to compress or accentuate the differences. The lower this is, the more the selector will sample from various optimizers rather than quickly specializing.
max_training_steps (int, optional) – The maximum number of trials to train the optimizer with. Setting this can be helpful when the optimizer itself becomes costly to train (e.g. for Gaussian Processes).
progbar (bool, optional) – Show live progress of the best contraction found so far.
optlib_opts – Supplied to the hyper-optimizer library initialization.
- compressed = True#
- multicontraction = False#
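A minimal construction sketch, again reusing the hypothetical geometry from the HyperOptimizer example; chi=32 is an arbitrary illustrative bond dimension:
```python
import cotengra as ctg

# sample ordered trees suitable for compressed contraction, scoring
# each trial by its peak memory when bonds are capped at chi
opt = ctg.HyperCompressedOptimizer(
    chi=32,
    minimize='peak-compressed',
    max_repeats=16,
)
tree = opt.search(inputs, output, size_dict)
```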
- class cotengra.hyperoptimizers.hyper.ReusableHyperCompressedOptimizer(chi=None, methods=('greedy-compressed', 'greedy-span', 'kahypar-agglom'), minimize='peak-compressed', **kwargs)[source]#
Bases:
ReusableHyperOptimizer
Like HyperCompressedOptimizer but it will re-instantiate the optimizer whenever a new contraction is detected, and also cache the paths found.
- Parameters:
chi (None or int, optional) – The maximum bond dimension to compress to. If None then use the square of the largest existing dimension. If minimize is specified as a full scoring function, this is ignored.
directory (None, True, or str, optional) – If specified use this directory as a persistent cache. If True auto generate a directory in the current working directory based on the options which are most likely to affect the path (see ReusableHyperOptimizer.get_path_relevant_opts).
overwrite (bool, optional) – If True, the optimizer will always run, overwriting old results in the cache. This can be used to update paths without deleting the whole cache.
set_surface_order (bool, optional) – If True, when reloading a path to turn into a ContractionTree, the 'surface order' of the path (used for compressed paths) will be set manually to the order in which the path was stored on disk.
hash_method ({'a', 'b', ...}, optional) – The method used to hash the contraction tree. The default, 'a', is faster hashwise but doesn't recognize when indices are permuted.
cache_only (bool, optional) – If True, the optimizer will only use the cache, and will raise KeyError if a contraction is not found.
opt_kwargs – Supplied to HyperCompressedOptimizer.
- set_surface_order = True#
- class cotengra.hyperoptimizers.hyper.HyperMultiOptimizer(methods=None, minimize='flops', max_repeats=128, max_time=None, parallel='auto', slicing_opts=None, slicing_reconf_opts=None, reconf_opts=None, optlib=None, space=None, score_compression=0.75, on_trial_error='warn', max_training_steps=None, progbar=False, **optlib_opts)[source]#
Bases:
HyperOptimizer
A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them.
- Parameters:
methods (None or sequence[str] or str, optional) – Which method(s) to use from list_hyper_functions().
minimize (str, Objective or callable, optional) – How to score each trial, used to train the optimizer and rank the results. If a custom callable, it should take a trial dict as its argument and return a single float.
max_repeats (int, optional) – The maximum number of trial contraction trees to generate. Default: 128.
max_time (None or float, optional) – The maximum amount of time to run for. Use None for no limit. You can also set an estimated execution 'rate' here like 'rate:1e9' that will terminate the search when the estimated FLOPs of the best contraction found divided by the rate is greater than the time spent searching, allowing quick termination on easy contractions.
parallel ('auto', False, True, int, or distributed.Client) – Whether to parallelize the search.
slicing_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing with the given options, and then update the flops and size of the trial with the sliced versions.
slicing_reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing interleaved with subtree reconfiguration with the given options, and then update the flops and size of the trial with the sliced and reconfigured versions.
reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try subtree reconfiguration with the given options, and then update the flops and size of the trial with the reconfigured versions.
optlib ({'baytune', 'nevergrad', 'chocolate', 'skopt'}, optional) – Which optimizer to sample and train with.
space (dict, optional) – The hyper space to search, see get_hyper_space for the default.
score_compression (float, optional) – Raise scores to this power in order to compress or accentuate the differences. The lower this is, the more the selector will sample from various optimizers rather than quickly specializing.
on_trial_error ({'warn', 'raise', 'ignore'}, optional) – What to do if a trial fails. If 'warn' (default), a warning will be printed and the trial will be given a score of inf. If 'raise' the error will be raised. If 'ignore' the trial will be given a score of inf silently.
max_training_steps (int, optional) – The maximum number of trials to train the optimizer with. Setting this can be helpful when the optimizer itself becomes costly to train (e.g. for Gaussian Processes).
progbar (bool, optional) – Show live progress of the best contraction found so far.
optlib_opts – Supplied to the hyper-optimizer library initialization.
- compressed = False#
- multicontraction = True#
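Construction matches HyperOptimizer exactly; the class differs only in the class-level multicontraction flag, which (an inference from the attributes above, not from a documented example) signals that its trial functions handle a set of contractions rather than a single one. A minimal sketch:
```python
import cotengra as ctg

# identical signature to HyperOptimizer; only the class-level
# multicontraction flag differs
opt = ctg.HyperMultiOptimizer(max_repeats=64, minimize='flops')
```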