cotengra.hyperoptimizers.hyper¶
Base hyper optimization functionality.
Attributes¶
- _PATH_FNS
- _OPTLIB_FNS
- _HYPER_SEARCH_SPACE
- _HYPER_CONSTANTS
Classes¶
- TrialSetObjective
- TrialTreeMulti
- SimulatedAnnealingTrialFn
- ReconfTrialFn
- SlicedReconfTrialFn
- CompressedReconfTrial
- ComputeScore – The final score wrapper, which performs some simple arithmetic on the trial score.
- HyperOptimizer – A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them.
- ReusableOptmizer – Mixin class for optimizers that can be reused, caching the paths found.
- ReusableHyperOptimizer – Like HyperOptimizer, but re-instantiated for each new contraction and caching the paths found.
- HyperCompressedOptimizer – A compressed contraction path optimizer that samples a series of ordered contraction trees.
- ReusableHyperCompressedOptimizer – Like HyperCompressedOptimizer, but re-instantiated for each new contraction and caching the paths found.
- HyperMultiOptimizer – A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them.
Functions¶
- get_default_optlib_eco() – Get the default optimizer favoring speed.
- get_default_optlib() – Get the default optimizer balancing quality and speed.
- register_hyper_optlib()
- register_hyper_function() – Register a contraction path finder to be used by the hyper-optimizer.
- list_hyper_functions() – Return a list of currently registered hyper contraction finders.
- make_hashable() – Make x hashable by recursively turning lists into tuples and dicts into sorted tuples of key-value pairs.
- hash_contraction() – Compute a hash for a particular contraction geometry.
Module Contents¶
- cotengra.hyperoptimizers.hyper.get_default_optlib_eco()[source]¶
Get the default optimizer favoring speed.
- cotengra.hyperoptimizers.hyper.get_default_optlib()[source]¶
Get the default optimizer balancing quality and speed.
- cotengra.hyperoptimizers.hyper._PATH_FNS¶
- cotengra.hyperoptimizers.hyper._OPTLIB_FNS¶
- cotengra.hyperoptimizers.hyper._HYPER_SEARCH_SPACE¶
- cotengra.hyperoptimizers.hyper._HYPER_CONSTANTS¶
- cotengra.hyperoptimizers.hyper.register_hyper_optlib(name, init_optimizers, get_setting, report_result)[source]¶
- cotengra.hyperoptimizers.hyper.register_hyper_function(name, ssa_func, space, constants=None)[source]¶
Register a contraction path finder to be used by the hyper-optimizer.
- Parameters:
name (str) – The name to call the method.
ssa_func (callable) – The raw function that returns a ContractionTree, with signature (inputs, output, size_dict, **kwargs).
space (dict[str, dict]) – The space of hyper-parameters to search.
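A registered finder can then be selected by name via the methods option of HyperOptimizer. The sketch below is only illustrative: the left-to-right contraction logic and the 'my-l2r' name are made up, and the exact schema of space entries ('type'/'min'/'max') is an assumption modeled on cotengra's built-in registrations, so check get_hyper_space for the authoritative format.
```python
from cotengra.core import ContractionTree
from cotengra.hyperoptimizers.hyper import register_hyper_function

def l2r_ssa_func(inputs, output, size_dict, strength=1.0):
    # toy method: contract the terms strictly left to right, ignoring
    # `strength` (a real method would use its hyper-parameters to
    # guide the choice of contractions)
    n = len(inputs)
    ssa_path = []
    last = 0  # SSA id of the running intermediate
    for i in range(1, n):
        ssa_path.append((last, i))
        last = n + i - 1
    return ContractionTree.from_path(
        inputs, output, size_dict, ssa_path=ssa_path
    )

register_hyper_function(
    name='my-l2r',  # hypothetical method name
    ssa_func=l2r_ssa_func,
    # assumed hyper-parameter space entry format
    space={'strength': {'type': 'FLOAT', 'min': 0.1, 'max': 10.0}},
)
```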
- cotengra.hyperoptimizers.hyper.list_hyper_functions()[source]¶
Return a list of currently registered hyper contraction finders.
- class cotengra.hyperoptimizers.hyper.TrialSetObjective(trial_fn, objective)[source]¶
- trial_fn¶
- objective¶
- class cotengra.hyperoptimizers.hyper.TrialTreeMulti(trial_fn, varmults, numconfigs)[source]¶
- trial_fn¶
- varmults¶
- numconfigs¶
- class cotengra.hyperoptimizers.hyper.SimulatedAnnealingTrialFn(trial_fn, **opts)[source]¶
- trial_fn¶
- opts¶
- class cotengra.hyperoptimizers.hyper.ReconfTrialFn(trial_fn, forested=False, parallel=False, **opts)[source]¶
- trial_fn¶
- forested = False¶
- parallel = False¶
- opts¶
- class cotengra.hyperoptimizers.hyper.SlicedReconfTrialFn(trial_fn, forested=False, parallel=False, **opts)[source]¶
- trial_fn¶
- forested = False¶
- parallel = False¶
- opts¶
- class cotengra.hyperoptimizers.hyper.CompressedReconfTrial(trial_fn, minimize=None, **opts)[source]¶
- trial_fn¶
- minimize = None¶
- opts¶
- class cotengra.hyperoptimizers.hyper.ComputeScore(fn, score_fn, score_compression=0.75, score_smudge=1e-06, on_trial_error='warn', seed=0)[source]¶
The final score wrapper, which performs some simple arithmetic on the trial score to make it more suitable for hyper-optimization.
- fn¶
- score_fn¶
- score_compression = 0.75¶
- score_smudge = 1e-06¶
- on_trial_error = 'warn'¶
- rng¶
- class cotengra.hyperoptimizers.hyper.HyperOptimizer(methods=None, minimize='flops', max_repeats=128, max_time=None, parallel='auto', simulated_annealing_opts=None, slicing_opts=None, slicing_reconf_opts=None, reconf_opts=None, optlib=None, space=None, score_compression=0.75, on_trial_error='warn', max_training_steps=None, progbar=False, **optlib_opts)[source]¶
Bases: cotengra.oe.PathOptimizer
A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them.
- Parameters:
methods (None or sequence[str] or str, optional) – Which method(s) to use from list_hyper_functions().
minimize (str, Objective or callable, optional) – How to score each trial, used to train the optimizer and rank the results. If a custom callable, it should take a trial dict as its argument and return a single float.
max_repeats (int, optional) – The maximum number of trial contraction trees to generate. Default: 128.
max_time (None or float, optional) – The maximum amount of time to run for. Use None for no limit. You can also set an estimated execution 'rate' here like 'rate:1e9' that will terminate the search when the estimated FLOPs of the best contraction found, divided by the rate, is greater than the time spent searching, allowing quick termination on easy contractions.
parallel ('auto', False, True, int, or distributed.Client) – Whether to parallelize the search.
slicing_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing with the given options, and then update the flops and size of the trial with the sliced versions.
slicing_reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing interleaved with subtree reconfiguration with the given options, and then update the flops and size of the trial with the sliced and reconfigured versions.
reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try subtree reconfiguration with the given options, and then update the flops and size of the trial with the reconfigured versions.
optlib ({'optuna', 'cmaes', 'nevergrad', 'skopt', ...}, optional) – Which optimizer to sample and train with.
space (dict, optional) – The hyper space to search, see get_hyper_space for the default.
score_compression (float, optional) – Raise scores to this power in order to compress or accentuate the differences. The lower this is, the more the selector will sample from various optimizers rather than quickly specializing.
on_trial_error ({'warn', 'raise', 'ignore'}, optional) – What to do if a trial fails. If 'warn' (default), a warning will be printed and the trial will be given a score of inf. If 'raise' the error will be raised. If 'ignore' the trial will be given a score of inf silently.
max_training_steps (int, optional) – The maximum number of trials to train the optimizer with. Setting this can be helpful when the optimizer itself becomes costly to train (e.g. for Gaussian Processes).
progbar (bool, optional) – Show live progress of the best contraction found so far.
optlib_opts – Supplied to the hyper-optimizer library initialization.
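A minimal usage sketch (the contraction geometry and option values below are made up for illustration):
```python
import cotengra as ctg

# a tiny made-up contraction: ((a,b) . (b,c)) . (c,d) -> (a,d)
inputs = [('a', 'b'), ('b', 'c'), ('c', 'd')]
output = ('a', 'd')
size_dict = {'a': 8, 'b': 8, 'c': 8, 'd': 8}

opt = ctg.HyperOptimizer(max_repeats=32, minimize='flops')

# search returns the best ContractionTree found
tree = opt.search(inputs, output, size_dict)
print(tree.contraction_cost(), tree.contraction_width())

# or use the opt_einsum style interface to get a raw path
path = opt(inputs, output, size_dict)
```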
- compressed = False¶
- multicontraction = False¶
- max_repeats = 128¶
- _repeats_start = 0¶
- max_time = None¶
- property parallel¶
- method_choices = []¶
- param_choices = []¶
- scores = []¶
- times = []¶
- costs_flops = []¶
- costs_write = []¶
- costs_size = []¶
- property minimize¶
- score_compression = 0.75¶
- on_trial_error = 'warn'¶
- best_score¶
- max_training_steps = None¶
- best¶
- trials_since_best = 0¶
- simulated_annealing_opts = None¶
- slicing_opts = None¶
- reconf_opts = None¶
- slicing_reconf_opts = None¶
- progbar = False¶
- _optimizer¶
- property tree¶
- property path¶
- search(inputs, output, size_dict)[source]¶
Run this optimizer and return the ContractionTree for the best path it finds.
- __call__(inputs, output, size_dict, memory_limit=None)[source]¶
opt_einsum interface, returns the direct path.
- cotengra.hyperoptimizers.hyper.make_hashable(x)[source]¶
Make x hashable by recursively turning lists into tuples and dicts into sorted tuples of key-value pairs.
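A minimal illustration; the output shown in the comment is inferred from the description above rather than taken from the source:
```python
from cotengra.hyperoptimizers.hyper import make_hashable

make_hashable({'b': [1, 2], 'a': 3})
# -> (('a', 3), ('b', (1, 2)))  # dict becomes sorted (key, value) tuples
```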
- cotengra.hyperoptimizers.hyper.hash_contraction(inputs, output, size_dict, method='a')[source]¶
Compute a hash for a particular contraction geometry.
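For instance, such a hash can serve as a cache key for repeated contractions; the geometry below is made up:
```python
from cotengra.hyperoptimizers.hyper import hash_contraction

key = hash_contraction(
    inputs=[('a', 'b'), ('b', 'c')],
    output=('a', 'c'),
    size_dict={'a': 2, 'b': 3, 'c': 2},
)
# the same contraction geometry always produces the same key
```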
- class cotengra.hyperoptimizers.hyper.ReusableOptmizer(*, directory=None, overwrite=False, hash_method='a', cache_only=False, **opt_kwargs)[source]¶
Bases: cotengra.oe.PathOptimizer
Mixin class for optimizers that can be reused, caching the paths and other relevant information for reconstructing the full tree.
- _suboptimizers¶
- _suboptimizer_kwargs¶
- _cache¶
- overwrite = False¶
- _hash_method = 'a'¶
- cache_only = False¶
- property last_opt¶
- abstract get_path_relevant_opts()[source]¶
We only want to hash on options that affect the contraction, not things like progbar.
- class cotengra.hyperoptimizers.hyper.ReusableHyperOptimizer(*, directory=None, overwrite=False, hash_method='a', cache_only=False, **opt_kwargs)[source]¶
Bases: ReusableOptmizer
Like HyperOptimizer but it will re-instantiate the optimizer whenever a new contraction is detected, and also cache the paths (and sliced indices) found.
- Parameters:
directory (None, True, or str, optional) – If specified use this directory as a persistent cache. If True auto generate a directory in the current working directory based on the options which are most likely to affect the path (see ReusableHyperOptimizer.get_path_relevant_opts).
overwrite (bool, optional) – If True, the optimizer will always run, overwriting old results in the cache. This can be used to update paths without deleting the whole cache.
set_surface_order (bool, optional) – If True, when reloading a path to turn into a ContractionTree, the 'surface order' of the path (used for compressed paths) will be set manually to match the order of the path loaded from disk.
hash_method ({'a', 'b', ...}, optional) – The method used to hash the contraction tree. The default, 'a', is faster hashwise but doesn't recognize when indices are permuted.
cache_only (bool, optional) – If True, the optimizer will only use the cache, and will raise KeyError if a contraction is not found.
opt_kwargs – Supplied to HyperOptimizer.
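A usage sketch, reusing an inputs/output/size_dict triple as above; the cache directory name is arbitrary:
```python
import cotengra as ctg

opt = ctg.ReusableHyperOptimizer(
    directory='ctg_path_cache',  # hypothetical on-disk cache location
    max_repeats=32,              # forwarded to each HyperOptimizer
)

path = opt(inputs, output, size_dict)  # first call: runs the full search
path = opt(inputs, output, size_dict)  # same geometry: served from cache
```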
- set_surface_order = False¶
- get_path_relevant_opts()[source]¶
Get a frozenset of the options that are most likely to affect the path. These are the options that we use when the directory name is not manually specified.
- property minimize¶
- update_from_tree(tree, overwrite='improved')[source]¶
Explicitly add the contraction that tree represents to the cache, for example if you have manually improved it via reconfiguration. If overwrite=False and the contraction is already present then do nothing. If overwrite='improved' then only overwrite if the new path is better. If overwrite=True then always overwrite.
- Parameters:
tree (ContractionTree) – The tree to add to the cache.
overwrite (bool or "improved", optional) – If True always overwrite, if False only overwrite if the contraction is missing, if 'improved' only overwrite if the new path is better (the default). Note that the comparison of scores is based on the default objective of the tree.
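For example, a cached tree that has been manually improved can be written back; this sketch assumes subtree_reconfigure_ as cotengra's in-place reconfiguration method, the rest follows the docstring above:
```python
# retrieve the tree, improve it in place, then store it back
tree = opt.search(inputs, output, size_dict)
tree.subtree_reconfigure_()    # in-place subtree reconfiguration
opt.update_from_tree(tree)     # kept only if it scores better (default)
```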
- class cotengra.hyperoptimizers.hyper.HyperCompressedOptimizer(chi=None, methods=('greedy-compressed', 'greedy-span', 'kahypar-agglom'), minimize='peak-compressed', **kwargs)[source]¶
Bases: HyperOptimizer
A compressed contraction path optimizer that samples a series of ordered contraction trees while optimizing the hyper parameters used to generate them.
- Parameters:
chi (None or int, optional) – The maximum bond dimension to compress to. If None then use the square of the largest existing dimension. If minimize is specified as a full scoring function, this is ignored.
methods (None or sequence[str] or str, optional) – Which method(s) to use from list_hyper_functions().
minimize (str, Objective or callable, optional) – How to score each trial, used to train the optimizer and rank the results. If a custom callable, it should take a trial dict as its argument and return a single float.
max_repeats (int, optional) – The maximum number of trial contraction trees to generate. Default: 128.
max_time (None or float, optional) – The maximum amount of time to run for. Use None for no limit. You can also set an estimated execution 'rate' here like 'rate:1e9' that will terminate the search when the estimated FLOPs of the best contraction found, divided by the rate, is greater than the time spent searching, allowing quick termination on easy contractions.
parallel ('auto', False, True, int, or distributed.Client) – Whether to parallelize the search.
slicing_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing with the given options, and then update the flops and size of the trial with the sliced versions.
slicing_reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing interleaved with subtree reconfiguration with the given options, and then update the flops and size of the trial with the sliced and reconfigured versions.
reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try subtree reconfiguration with the given options, and then update the flops and size of the trial with the reconfigured versions.
optlib ({'optuna', 'cmaes', 'nevergrad', 'skopt', ...}, optional) – Which optimizer to sample and train with.
space (dict, optional) – The hyper space to search, see get_hyper_space for the default.
score_compression (float, optional) – Raise scores to this power in order to compress or accentuate the differences. The lower this is, the more the selector will sample from various optimizers rather than quickly specializing.
max_training_steps (int, optional) – The maximum number of trials to train the optimizer with. Setting this can be helpful when the optimizer itself becomes costly to train (e.g. for Gaussian Processes).
progbar (bool, optional) – Show live progress of the best contraction found so far.
optlib_opts – Supplied to the hyper-optimizer library initialization.
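A minimal sketch, reusing the inputs/output/size_dict from earlier; chi=32 is an arbitrary example value:
```python
import cotengra as ctg

opt = ctg.HyperCompressedOptimizer(chi=32, max_repeats=32)

# the resulting tree is ordered, suitable for compressed contraction
tree = opt.search(inputs, output, size_dict)
```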
- compressed = True¶
- multicontraction = False¶
- class cotengra.hyperoptimizers.hyper.ReusableHyperCompressedOptimizer(chi=None, methods=('greedy-compressed', 'greedy-span', 'kahypar-agglom'), minimize='peak-compressed', **kwargs)[source]¶
Bases: ReusableHyperOptimizer
Like HyperCompressedOptimizer but it will re-instantiate the optimizer whenever a new contraction is detected, and also cache the paths found.
- Parameters:
chi (None or int, optional) – The maximum bond dimension to compress to. If None then use the square of the largest existing dimension. If minimize is specified as a full scoring function, this is ignored.
directory (None, True, or str, optional) – If specified use this directory as a persistent cache. If True auto generate a directory in the current working directory based on the options which are most likely to affect the path (see ReusableHyperOptimizer.get_path_relevant_opts).
overwrite (bool, optional) – If True, the optimizer will always run, overwriting old results in the cache. This can be used to update paths without deleting the whole cache.
set_surface_order (bool, optional) – If True, when reloading a path to turn into a ContractionTree, the 'surface order' of the path (used for compressed paths) will be set manually to match the order of the path loaded from disk.
hash_method ({'a', 'b', ...}, optional) – The method used to hash the contraction tree. The default, 'a', is faster hashwise but doesn't recognize when indices are permuted.
cache_only (bool, optional) – If True, the optimizer will only use the cache, and will raise KeyError if a contraction is not found.
opt_kwargs – Supplied to HyperCompressedOptimizer.
- set_surface_order = True¶
- class cotengra.hyperoptimizers.hyper.HyperMultiOptimizer(methods=None, minimize='flops', max_repeats=128, max_time=None, parallel='auto', simulated_annealing_opts=None, slicing_opts=None, slicing_reconf_opts=None, reconf_opts=None, optlib=None, space=None, score_compression=0.75, on_trial_error='warn', max_training_steps=None, progbar=False, **optlib_opts)[source]¶
Bases: HyperOptimizer
A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them.
- Parameters:
methods (None or sequence[str] or str, optional) – Which method(s) to use from list_hyper_functions().
minimize (str, Objective or callable, optional) – How to score each trial, used to train the optimizer and rank the results. If a custom callable, it should take a trial dict as its argument and return a single float.
max_repeats (int, optional) – The maximum number of trial contraction trees to generate. Default: 128.
max_time (None or float, optional) – The maximum amount of time to run for. Use None for no limit. You can also set an estimated execution 'rate' here like 'rate:1e9' that will terminate the search when the estimated FLOPs of the best contraction found, divided by the rate, is greater than the time spent searching, allowing quick termination on easy contractions.
parallel ('auto', False, True, int, or distributed.Client) – Whether to parallelize the search.
slicing_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing with the given options, and then update the flops and size of the trial with the sliced versions.
slicing_reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing interleaved with subtree reconfiguration with the given options, and then update the flops and size of the trial with the sliced and reconfigured versions.
reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try subtree reconfiguration with the given options, and then update the flops and size of the trial with the reconfigured versions.
optlib ({'optuna', 'cmaes', 'nevergrad', 'skopt', ...}, optional) – Which optimizer to sample and train with.
space (dict, optional) – The hyper space to search, see get_hyper_space for the default.
score_compression (float, optional) – Raise scores to this power in order to compress or accentuate the differences. The lower this is, the more the selector will sample from various optimizers rather than quickly specializing.
on_trial_error ({'warn', 'raise', 'ignore'}, optional) – What to do if a trial fails. If 'warn' (default), a warning will be printed and the trial will be given a score of inf. If 'raise' the error will be raised. If 'ignore' the trial will be given a score of inf silently.
max_training_steps (int, optional) – The maximum number of trials to train the optimizer with. Setting this can be helpful when the optimizer itself becomes costly to train (e.g. for Gaussian Processes).
progbar (bool, optional) – Show live progress of the best contraction found so far.
optlib_opts – Supplied to the hyper-optimizer library initialization.
- compressed = False¶
- multicontraction = True¶