cotengra.hyperoptimizers.hyper

Base hyper optimization functionality.

Attributes

_PATH_FNS

_OPTLIB_FNS

_HYPER_SEARCH_SPACE

_HYPER_CONSTANTS

Classes

TrialSetObjective

TrialConvertTree

TrialTreeMulti

SlicedTrialFn

SimulatedAnnealingTrialFn

ReconfTrialFn

SlicedReconfTrialFn

CompressedReconfTrial

ComputeScore

The final score wrapper, which performs some simple arithmetic on the trial score to make it more suitable for hyper-optimization.

HyperOptimizer

A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them.

ReusableOptmizer

Mixin class for optimizers that can be reused, caching the paths and other relevant information for reconstructing the full tree.

ReusableHyperOptimizer

Like HyperOptimizer but it will re-instantiate the optimizer whenever a new contraction is detected, and also cache the paths (and sliced indices) found.

HyperCompressedOptimizer

A compressed contraction path optimizer that samples a series of ordered contraction trees while optimizing the hyper parameters used to generate them.

ReusableHyperCompressedOptimizer

Like HyperCompressedOptimizer but it will re-instantiate the optimizer whenever a new contraction is detected, and also cache the paths found.

HyperMultiOptimizer

A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them.

Functions

get_default_hq_methods()

get_default_optlib_eco()

Get the default optimizer favoring speed.

get_default_optlib()

Get the default optimizer balancing quality and speed.

get_hyper_space()

get_hyper_constants()

register_hyper_optlib(name, init_optimizers, ...)

register_hyper_function(name, ssa_func, space[, constants])

Register a contraction path finder to be used by the hyper-optimizer.

list_hyper_functions()

Return a list of currently registered hyper contraction finders.

base_trial_fn(inputs, output, size_dict, method, **kwargs)

progress_description(best[, info])

sortedtuple(x)

make_hashable(x)

Make x hashable by recursively turning lists into tuples and dicts into sorted tuples of key-value pairs.

hash_contraction_a(inputs, output, size_dict)

hash_contraction_b(inputs, output, size_dict)

hash_contraction(inputs, output, size_dict[, method])

Compute a hash for a particular contraction geometry.

Module Contents

cotengra.hyperoptimizers.hyper.get_default_hq_methods()[source]
cotengra.hyperoptimizers.hyper.get_default_optlib_eco()[source]

Get the default optimizer favoring speed.

cotengra.hyperoptimizers.hyper.get_default_optlib()[source]

Get the default optimizer balancing quality and speed.

cotengra.hyperoptimizers.hyper._PATH_FNS
cotengra.hyperoptimizers.hyper._OPTLIB_FNS
cotengra.hyperoptimizers.hyper._HYPER_SEARCH_SPACE
cotengra.hyperoptimizers.hyper._HYPER_CONSTANTS
cotengra.hyperoptimizers.hyper.get_hyper_space()[source]
cotengra.hyperoptimizers.hyper.get_hyper_constants()[source]
cotengra.hyperoptimizers.hyper.register_hyper_optlib(name, init_optimizers, get_setting, report_result)[source]
cotengra.hyperoptimizers.hyper.register_hyper_function(name, ssa_func, space, constants=None)[source]

Register a contraction path finder to be used by the hyper-optimizer.

Parameters:
  • name (str) – The name to call the method.

  • ssa_func (callable) – The raw function that returns a ‘ContractionTree’, with signature (inputs, output, size_dict, **kwargs).

  • space (dict[str, dict]) – The space of hyper-parameters to search.
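
A minimal sketch of registering a custom method. The method name 'my-basic', the wrapper function and its 'which' hyper-parameter are hypothetical choices for illustration, and the space entry is assumed to follow the same form as the entries returned by get_hyper_space():

    import cotengra as ctg
    from cotengra.hyperoptimizers.hyper import register_hyper_function

    def my_basic_tree(inputs, output, size_dict, which="greedy", **kwargs):
        # build a ContractionTree with one of cotengra's built-in optimizers,
        # exposing 'which' as the single hyper-parameter to be sampled
        return ctg.array_contract_tree(inputs, output, size_dict, optimize=which)

    register_hyper_function(
        name="my-basic",
        ssa_func=my_basic_tree,
        space={"which": {"type": "STRING", "options": ["greedy", "random-greedy"]}},
    )

    # the new method can then be requested by name
    opt = ctg.HyperOptimizer(methods=["my-basic"])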

cotengra.hyperoptimizers.hyper.list_hyper_functions()[source]

Return a list of currently registered hyper contraction finders.

cotengra.hyperoptimizers.hyper.base_trial_fn(inputs, output, size_dict, method, **kwargs)[source]
class cotengra.hyperoptimizers.hyper.TrialSetObjective(trial_fn, objective)[source]
trial_fn
objective
__call__(*args, **kwargs)[source]
class cotengra.hyperoptimizers.hyper.TrialConvertTree(trial_fn, cls)[source]
trial_fn
cls
__call__(*args, **kwargs)[source]
class cotengra.hyperoptimizers.hyper.TrialTreeMulti(trial_fn, varmults, numconfigs)[source]
trial_fn
varmults
numconfigs
__call__(*args, **kwargs)[source]
class cotengra.hyperoptimizers.hyper.SlicedTrialFn(trial_fn, **opts)[source]
trial_fn
opts
__call__(*args, **kwargs)[source]
class cotengra.hyperoptimizers.hyper.SimulatedAnnealingTrialFn(trial_fn, **opts)[source]
trial_fn
opts
__call__(*args, **kwargs)[source]
class cotengra.hyperoptimizers.hyper.ReconfTrialFn(trial_fn, forested=False, parallel=False, **opts)[source]
trial_fn
forested = False
parallel = False
opts
__call__(*args, **kwargs)[source]
class cotengra.hyperoptimizers.hyper.SlicedReconfTrialFn(trial_fn, forested=False, parallel=False, **opts)[source]
trial_fn
forested = False
parallel = False
opts
__call__(*args, **kwargs)[source]
class cotengra.hyperoptimizers.hyper.CompressedReconfTrial(trial_fn, minimize=None, **opts)[source]
trial_fn
minimize = None
opts
__call__(*args, **kwargs)[source]
class cotengra.hyperoptimizers.hyper.ComputeScore(fn, score_fn, score_compression=0.75, score_smudge=1e-06, on_trial_error='warn', seed=0)[source]

The final score wrapper, which performs some simple arithmetic on the trial score to make it more suitable for hyper-optimization.

fn
score_fn
score_compression = 0.75
score_smudge = 1e-06
on_trial_error = 'warn'
rng
__call__(*args, **kwargs)[source]
cotengra.hyperoptimizers.hyper.progress_description(best, info='concise')[source]
class cotengra.hyperoptimizers.hyper.HyperOptimizer(methods=None, minimize='flops', max_repeats=128, max_time=None, parallel='auto', simulated_annealing_opts=None, slicing_opts=None, slicing_reconf_opts=None, reconf_opts=None, optlib=None, space=None, score_compression=0.75, on_trial_error='warn', max_training_steps=None, progbar=False, **optlib_opts)[source]

Bases: cotengra.oe.PathOptimizer

A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them.

Parameters:
  • methods (None or sequence[str] or str, optional) – Which method(s) to use from list_hyper_functions().

  • minimize (str, Objective or callable, optional) – How to score each trial, used to train the optimizer and rank the results. If a custom callable, it should take a trial dict as its argument and return a single float.

  • max_repeats (int, optional) – The maximum number of trial contraction trees to generate. Default: 128.

  • max_time (None or float, optional) – The maximum amount of time to run for. Use None for no limit. You can also set an estimated execution ‘rate’ here, such as 'rate:1e9', which terminates the search once the estimated FLOPs of the best contraction found, divided by the rate, exceed the time already spent searching, allowing quick termination on easy contractions.

  • parallel ('auto', False, True, int, or distributed.Client) – Whether to parallelize the search.

  • slicing_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing with the given options, and then update the flops and size of the trial with the sliced versions.

  • slicing_reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing interleaved with subtree reconfiguration with the given options, and then update the flops and size of the trial with the sliced and reconfigured versions.

  • reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try subtree reconfiguration with the given options, and then update the flops and size of the trial with the reconfigured versions.

  • optlib ({'optuna', 'cmaes', 'nevergrad', 'skopt', ...}, optional) – Which optimizer to sample and train with.

  • space (dict, optional) – The hyper space to search, see get_hyper_space for the default.

  • score_compression (float, optional) – Raise scores to this power in order to compress or accentuate the differences. The lower this is, the more the selector will sample from various optimizers rather than quickly specializing.

  • on_trial_error ({'warn', 'raise', 'ignore'}, optional) – What to do if a trial fails. If 'warn' (default), a warning will be printed and the trial will be given a score of inf. If 'raise' the error will be raised. If 'ignore' the trial will be given a score of inf silently.

  • max_training_steps (int, optional) – The maximum number of trials to train the optimizer with. Setting this can be helpful when the optimizer itself becomes costly to train (e.g. for Gaussian Processes).

  • progbar (bool, optional) – Show live progress of the best contraction found so far.

  • optlib_opts – Supplied to the hyper-optimizer library initialization.
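
A minimal usage sketch, assuming ctg.utils.rand_equation to generate a small random test network; the particular methods and options chosen below are illustrative rather than defaults:

    import cotengra as ctg

    # a small random tensor network to search over
    inputs, output, shapes, size_dict = ctg.utils.rand_equation(20, 3, seed=42)

    opt = ctg.HyperOptimizer(
        methods=["greedy", "labels"],   # restrict which registered methods are sampled
        minimize="combo",               # score trials on a flops + write combination
        max_repeats=64,
        max_time="rate:1e9",            # stop once search time exceeds estimated contraction time
        reconf_opts={},                 # refine every trial with subtree reconfiguration
        progbar=True,
    )

    tree = opt.search(inputs, output, size_dict)
    print(tree.contraction_width(), tree.contraction_cost())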

compressed = False
multicontraction = False
max_repeats = 128
_repeats_start = 0
max_time = None
property parallel
method_choices = []
param_choices = []
scores = []
times = []
costs_flops = []
costs_write = []
costs_size = []
property minimize
score_compression = 0.75
on_trial_error = 'warn'
best_score
max_training_steps = None
best
trials_since_best = 0
simulated_annealing_opts = None
slicing_opts = None
reconf_opts = None
slicing_reconf_opts = None
progbar = False
_optimizer
property tree
property path
setup(inputs, output, size_dict)[source]
_maybe_cancel_futures()[source]
_maybe_report_result(setting, trial)[source]
_gen_results(repeats, trial_fn, trial_args)[source]
_get_and_report_next_future()[source]
_gen_results_parallel(repeats, trial_fn, trial_args)[source]
search(inputs, output, size_dict)[source]

Run this optimizer and return the ContractionTree for the best path it finds.

get_tree()[source]

Return the ContractionTree for the best path found.

__call__(inputs, output, size_dict, memory_limit=None)[source]

opt_einsum interface, returns direct path.
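
Since HyperOptimizer subclasses opt_einsum's PathOptimizer, an instance can be passed directly as the optimize argument. A brief sketch (note that a single HyperOptimizer instance is intended for a single contraction; ReusableHyperOptimizer handles re-use across different contractions):

    import numpy as np
    import opt_einsum as oe
    import cotengra as ctg

    opt = ctg.HyperOptimizer(methods=["greedy"], max_repeats=32)
    x, y, z = (np.random.rand(8, 8) for _ in range(3))

    # opt_einsum calls opt(inputs, output, size_dict) internally to get the path
    out = oe.contract("ab,bc,cd->ad", x, y, z, optimize=opt)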

get_trials(sort=None)[source]
print_trials(sort=None)[source]
to_df()[source]

Create a single pandas.DataFrame with all trials and scores.

to_dfs_parametrized()[source]

Create a pandas.DataFrame for each method, with all parameters and scores for each trial.

plot_trials[source]
plot_trials_alt[source]
plot_scatter[source]
plot_scatter_alt[source]
plot_parameters_parallel[source]
cotengra.hyperoptimizers.hyper.sortedtuple(x)[source]
cotengra.hyperoptimizers.hyper.make_hashable(x)[source]

Make x hashable by recursively turning lists into tuples and dicts into sorted tuples of key-value pairs.

cotengra.hyperoptimizers.hyper.hash_contraction_a(inputs, output, size_dict)[source]
cotengra.hyperoptimizers.hyper.hash_contraction_b(inputs, output, size_dict)[source]
cotengra.hyperoptimizers.hyper.hash_contraction(inputs, output, size_dict, method='a')[source]

Compute a hash for a particular contraction geometry.
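
A brief sketch of hashing a contraction geometry (the tiny triangle network below is purely illustrative):

    from cotengra.hyperoptimizers.hyper import hash_contraction

    inputs = [("a", "b"), ("b", "c"), ("c", "a")]
    output = ()
    size_dict = {"a": 2, "b": 2, "c": 2}

    # identical geometries always produce the same key; this is what
    # ReusableHyperOptimizer uses to look up cached paths
    key = hash_contraction(inputs, output, size_dict, method="a")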

class cotengra.hyperoptimizers.hyper.ReusableOptmizer(*, directory=None, overwrite=False, hash_method='a', cache_only=False, **opt_kwargs)[source]

Bases: cotengra.oe.PathOptimizer

Mixin class for optimizers that can be reused, caching the paths and other relevant information for reconstructing the full tree.

_suboptimizers
_suboptimizer_kwargs
_cache
overwrite = False
_hash_method = 'a'
cache_only = False
property last_opt
abstract get_path_relevant_opts()[source]

We only want to hash on options that affect the contraction, not things like progbar.

auto_hash_path_relevant_opts()[source]

Automatically hash the path relevant options used to create the optimizer.

hash_query(inputs, output, size_dict)[source]

Hash the contraction specification, returning a tuple of the hash and whether the contraction is already present in the cache.

class cotengra.hyperoptimizers.hyper.ReusableHyperOptimizer(*, directory=None, overwrite=False, hash_method='a', cache_only=False, **opt_kwargs)[source]

Bases: ReusableOptmizer

Like HyperOptimizer but it will re-instantiate the optimizer whenever a new contraction is detected, and also cache the paths (and sliced indices) found.

Parameters:
  • directory (None, True, or str, optional) – If specified, use this directory as a persistent cache. If True, auto-generate a directory in the current working directory based on the options which are most likely to affect the path (see ReusableHyperOptimizer.get_path_relevant_opts).

  • overwrite (bool, optional) – If True, the optimizer will always run, overwriting old results in the cache. This can be used to update paths without deleting the whole cache.

  • set_surface_order (bool, optional) – If True, when reloading a path to turn into a ContractionTree, the ‘surface order’ of the path (used for compressed contractions) will be set explicitly to the order in which the path is stored on disk.

  • hash_method ({'a', 'b', ...}, optional) – The method used to hash the contraction tree. The default, 'a', is faster to compute but doesn’t recognize when indices are merely permuted.

  • cache_only (bool, optional) – If True, the optimizer will only use the cache, and will raise KeyError if a contraction is not found.

  • opt_kwargs – Supplied to HyperOptimizer.
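
A minimal sketch of persistent caching. The directory name 'ctg_cache' is a hypothetical choice, and ctg.utils.rand_equation is assumed just to provide a test network:

    import cotengra as ctg

    inputs, output, shapes, size_dict = ctg.utils.rand_equation(20, 3, seed=42)

    opt = ctg.ReusableHyperOptimizer(
        max_repeats=64,
        reconf_opts={},
        directory="ctg_cache",   # hypothetical folder; directory=True would auto-name it
    )

    tree = opt.search(inputs, output, size_dict)   # first call runs the full search
    tree = opt.search(inputs, output, size_dict)   # identical query now just hits the cache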

suboptimizer[source]
set_surface_order = False
get_path_relevant_opts()[source]

Get a frozenset of the options that are most likely to affect the path. These are the options that we use when the directory name is not manually specified.

property minimize
update_from_tree(tree, overwrite='improved')[source]

Explicitly add the contraction that tree represents to the cache, for example if you have manually improved it via reconfiguration. If overwrite=False and the contraction is already present then do nothing. If overwrite='improved' then only overwrite if the new path is better. If overwrite=True then always overwrite.

Parameters:
  • tree (ContractionTree) – The tree to add to the cache.

  • overwrite (bool or "improved", optional) – If True always overwrite, if False only overwrite if the contraction is missing, if 'improved' only overwrite if the new path is better (the default). Note that the comparison of scores is based on the default objective of the tree.
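
A sketch of manually refining a cached result and pushing it back, using subtree_reconfigure as the illustrative refinement step:

    import cotengra as ctg

    inputs, output, shapes, size_dict = ctg.utils.rand_equation(20, 3, seed=42)
    opt = ctg.ReusableHyperOptimizer(max_repeats=32)

    tree = opt.search(inputs, output, size_dict)

    # refine the tree further, outside of the optimizer
    tree = tree.subtree_reconfigure()

    # store it back; with the default overwrite='improved' it only replaces
    # the cached path if the refined tree scores better
    opt.update_from_tree(tree)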

_run_optimizer(inputs, output, size_dict)[source]
_maybe_run_optimizer(inputs, output, size_dict)[source]
__call__(inputs, output, size_dict, memory_limit=None)[source]
search(inputs, output, size_dict)[source]
cleanup()[source]
class cotengra.hyperoptimizers.hyper.HyperCompressedOptimizer(chi=None, methods=('greedy-compressed', 'greedy-span', 'kahypar-agglom'), minimize='peak-compressed', **kwargs)[source]

Bases: HyperOptimizer

A compressed contraction path optimizer that samples a series of ordered contraction trees while optimizing the hyper parameters used to generate them.

Parameters:
  • chi (None or int, optional) – The maximum bond dimension to compress to. If None then use the square of the largest existing dimension. If minimize is specified as a full scoring function, this is ignored.

  • methods (None or sequence[str] or str, optional) – Which method(s) to use from list_hyper_functions().

  • minimize (str, Objective or callable, optional) – How to score each trial, used to train the optimizer and rank the results. If a custom callable, it should take a trial dict as its argument and return a single float.

  • max_repeats (int, optional) – The maximum number of trial contraction trees to generate. Default: 128.

  • max_time (None or float, optional) – The maximum amount of time to run for. Use None for no limit. You can also set an estimated execution ‘rate’ here, such as 'rate:1e9', which terminates the search once the estimated FLOPs of the best contraction found, divided by the rate, exceed the time already spent searching, allowing quick termination on easy contractions.

  • parallel ('auto', False, True, int, or distributed.Client) – Whether to parallelize the search.

  • slicing_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing with the given options, and then update the flops and size of the trial with the sliced versions.

  • slicing_reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing interleaved with subtree reconfiguration with the given options, and then update the flops and size of the trial with the sliced and reconfigured versions.

  • reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try subtree reconfiguration with the given options, and then update the flops and size of the trial with the reconfigured versions.

  • optlib ({'optuna', 'cmaes', 'nevergrad', 'skopt', ...}, optional) – Which optimizer to sample and train with.

  • space (dict, optional) – The hyper space to search, see get_hyper_space for the default.

  • score_compression (float, optional) – Raise scores to this power in order to compress or accentuate the differences. The lower this is, the more the selector will sample from various optimizers rather than quickly specializing.

  • max_training_steps (int, optional) – The maximum number of trials to train the optimizer with. Setting this can be helpful when the optimizer itself becomes costly to train (e.g. for Gaussian Processes).

  • progbar (bool, optional) – Show live progress of the best contraction found so far.

  • optlib_opts – Supplied to the hyper-optimizer library initialization.
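
A minimal compressed-search sketch; chi=32 and the other option values are illustrative, and ctg.utils.rand_equation is assumed just to provide a test network:

    import cotengra as ctg

    inputs, output, shapes, size_dict = ctg.utils.rand_equation(20, 3, seed=42)

    opt = ctg.HyperCompressedOptimizer(
        chi=32,                      # maximum bond dimension kept during compression
        minimize="peak-compressed",  # target peak memory of the compressed contraction
        max_repeats=64,
        progbar=True,
    )

    # returns an ordered (compressed) contraction tree
    tree = opt.search(inputs, output, size_dict)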

compressed = True
multicontraction = False
class cotengra.hyperoptimizers.hyper.ReusableHyperCompressedOptimizer(chi=None, methods=('greedy-compressed', 'greedy-span', 'kahypar-agglom'), minimize='peak-compressed', **kwargs)[source]

Bases: ReusableHyperOptimizer

Like HyperCompressedOptimizer but it will re-instantiate the optimizer whenever a new contraction is detected, and also cache the paths found.

Parameters:
  • chi (None or int, optional) – The maximum bond dimension to compress to. If None then use the square of the largest existing dimension. If minimize is specified as a full scoring function, this is ignored.

  • directory (None, True, or str, optional) – If specified, use this directory as a persistent cache. If True, auto-generate a directory in the current working directory based on the options which are most likely to affect the path (see ReusableHyperOptimizer.get_path_relevant_opts).

  • overwrite (bool, optional) – If True, the optimizer will always run, overwriting old results in the cache. This can be used to update paths without deleting the whole cache.

  • set_surface_order (bool, optional) – If True, when reloading a path to turn into a ContractionTree, the ‘surface order’ of the path (used for compressed contractions) will be set explicitly to the order in which the path is stored on disk.

  • hash_method ({'a', 'b', ...}, optional) – The method used to hash the contraction tree. The default, 'a', is faster to compute but doesn’t recognize when indices are merely permuted.

  • cache_only (bool, optional) – If True, the optimizer will only use the cache, and will raise KeyError if a contraction is not found.

  • opt_kwargs – Supplied to HyperCompressedOptimizer.

suboptimizer[source]
set_surface_order = True
class cotengra.hyperoptimizers.hyper.HyperMultiOptimizer(methods=None, minimize='flops', max_repeats=128, max_time=None, parallel='auto', simulated_annealing_opts=None, slicing_opts=None, slicing_reconf_opts=None, reconf_opts=None, optlib=None, space=None, score_compression=0.75, on_trial_error='warn', max_training_steps=None, progbar=False, **optlib_opts)[source]

Bases: HyperOptimizer

A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them.

Parameters:
  • methods (None or sequence[str] or str, optional) – Which method(s) to use from list_hyper_functions().

  • minimize (str, Objective or callable, optional) – How to score each trial, used to train the optimizer and rank the results. If a custom callable, it should take a trial dict as its argument and return a single float.

  • max_repeats (int, optional) – The maximum number of trial contraction trees to generate. Default: 128.

  • max_time (None or float, optional) – The maximum amount of time to run for. Use None for no limit. You can also set an estimated execution ‘rate’ here, such as 'rate:1e9', which terminates the search once the estimated FLOPs of the best contraction found, divided by the rate, exceed the time already spent searching, allowing quick termination on easy contractions.

  • parallel ('auto', False, True, int, or distributed.Client) – Whether to parallelize the search.

  • slicing_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing with the given options, and then update the flops and size of the trial with the sliced versions.

  • slicing_reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing interleaved with subtree reconfiguration with the given options, and then update the flops and size of the trial with the sliced and reconfigured versions.

  • reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try subtree reconfiguration with the given options, and then update the flops and size of the trial with the reconfigured versions.

  • optlib ({'optuna', 'cmaes', 'nevergrad', 'skopt', ...}, optional) – Which optimizer to sample and train with.

  • space (dict, optional) – The hyper space to search, see get_hyper_space for the default.

  • score_compression (float, optional) – Raise scores to this power in order to compress or accentuate the differences. The lower this is, the more the selector will sample from various optimizers rather than quickly specializing.

  • on_trial_error ({'warn', 'raise', 'ignore'}, optional) – What to do if a trial fails. If 'warn' (default), a warning will be printed and the trial will be given a score of inf. If 'raise' the error will be raised. If 'ignore' the trial will be given a score of inf silently.

  • max_training_steps (int, optional) – The maximum number of trials to train the optimizer with. Setting this can be helpful when the optimizer itself becomes costly to train (e.g. for Gaussian Processes).

  • progbar (bool, optional) – Show live progress of the best contraction found so far.

  • optlib_opts – Supplied to the hyper-optimizer library initialization.

compressed = False
multicontraction = True