cotengra.hyper#

Module Contents#

Classes#

TrialConvertTree

TrialTreeMulti

SlicedTrialFn

ReconfTrialFn

SlicedReconfTrialFn

CompressedReconfTrial

ComputeScore

HyperOptimizer

A path optimizer that samples a series of contraction trees

ReusableHyperOptimizer

Like HyperOptimizer but it will re-instantiate the optimizer

Functions#

get_hyper_space()

get_hyper_constants()

register_hyper_optlib(name, init_optimizers, ...)

register_hyper_function(name, ssa_func, space[, constants])

Register a contraction path finder to be used by the hyper-optimizer.

list_hyper_functions()

Return a list of currently registered hyper contraction finders.

find_tree(*args, **kwargs)

progress_description(best)

sortedtuple(x)

hash_contraction_a(inputs, output, size_dict)

hash_contraction_b(inputs, output, size_dict)

hash_contraction(inputs, output, size_dict[, method])

Compute a hash for a particular contraction geometry.

Attributes#

cotengra.hyper.DEFAULT_METHODS = ['greedy']#
cotengra.hyper.DEFAULT_OPTLIB = 'optuna'#
cotengra.hyper._PATH_FNS#
cotengra.hyper._OPTLIB_FNS#
cotengra.hyper._HYPER_SEARCH_SPACE#
cotengra.hyper._HYPER_CONSTANTS#
cotengra.hyper.get_hyper_space()#
cotengra.hyper.get_hyper_constants()#
cotengra.hyper.register_hyper_optlib(name, init_optimizers, get_setting, report_result)#
cotengra.hyper.register_hyper_function(name, ssa_func, space, constants=None)#

Register a contraction path finder to be used by the hyper-optimizer.

Parameters:
  • name (str) – The name under which to register the method.

  • ssa_func (callable) – The raw opt_einsum-style function that returns a ContractionTree.

  • space (dict[str, dict]) – The space of hyper-parameters to search.
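
For orientation, here is a minimal, hedged sketch of registering a deliberately naive custom finder. The finder itself, the hyper-parameter name 'strength', and the exact space-spec format are assumptions made for this example (compare with what get_hyper_space() returns); ContractionTree.from_path is assumed to accept a linear path via path=.

import cotengra as ctg
from cotengra import ContractionTree

def my_pairwise_tree(inputs, output, size_dict, strength=1.0):
    # deliberately naive: always contract the first two remaining terms;
    # `strength` exists only so the hyper-optimizer has something to tune
    path = [(0, 1)] * (len(inputs) - 1)
    return ContractionTree.from_path(inputs, output, size_dict, path=path)

ctg.hyper.register_hyper_function(
    name='my-pairwise',
    ssa_func=my_pairwise_tree,
    # space-spec format assumed to mirror get_hyper_space()
    space={'strength': {'type': 'FLOAT', 'min': 0.1, 'max': 10.0}},
)

assert 'my-pairwise' in ctg.hyper.list_hyper_functions()

Once registered, the name can be passed to HyperOptimizer via methods= alongside the built-in finders.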

cotengra.hyper.list_hyper_functions()#

Return a list of currently registered hyper contraction finders.

cotengra.hyper.find_tree(*args, **kwargs)#
class cotengra.hyper.TrialConvertTree(trial_fn, cls)#
__call__(*args, **kwargs)#
class cotengra.hyper.TrialTreeMulti(trial_fn, varmults, numconfigs)#
__call__(*args, **kwargs)#
class cotengra.hyper.SlicedTrialFn(trial_fn, **opts)#
__call__(*args, **kwargs)#
class cotengra.hyper.ReconfTrialFn(trial_fn, forested=False, parallel=False, **opts)#
__call__(*args, **kwargs)#
class cotengra.hyper.SlicedReconfTrialFn(trial_fn, forested=False, parallel=False, **opts)#
__call__(*args, **kwargs)#
class cotengra.hyper.CompressedReconfTrial(trial_fn, chi, **opts)#
__call__(*args, **kwargs)#
class cotengra.hyper.ComputeScore(fn, score_fn, score_compression)#
__call__(*args, **kwargs)#
cotengra.hyper.progress_description(best)#
class cotengra.hyper.HyperOptimizer(methods=None, minimize='flops', max_repeats=128, max_time=None, parallel='auto', slicing_opts=None, slicing_reconf_opts=None, reconf_opts=None, optlib=DEFAULT_OPTLIB, space=None, score_compression=0.75, max_training_steps=None, compressed=False, multicontraction=False, progbar=False, **optlib_opts)#

Bases: opt_einsum.paths.PathOptimizer

A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them.

Parameters:
  • methods (None or sequence[str] or str, optional) – Which method(s) to use from list_hyper_functions().

  • minimize ({'flops', 'write', 'size', 'combo'} or callable, optional) – How to score each trial, used to train the optimizer and rank the results. If a custom callable, it should take a trial dict as its argument and return a single float.

  • max_repeats (int, optional) – The maximum number of trial contraction trees to generate. Default: 128.

  • max_time (None or float, optional) – The maximum amount of time to run for. Use None for no limit. You can also set an estimated execution ‘rate’ here like 'rate:1e9' that will terminate the search when the estimated FLOPs of the best contraction found divided by the rate is greater than the time spent searching, allowing quick termination on easy contractions.

  • parallel ('auto', False, True, int, or distributed.Client) – Whether to parallelize the search.

  • slicing_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing with the given options, and then update the flops and size of the trial with the sliced versions.

  • slicing_reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing interleaved with subtree reconfiguration with the given options, and then update the flops and size of the trial with the sliced and reconfigured versions.

  • reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try subtree reconfiguration with the given options, and then update the flops and size of the trial with the reconfigured versions.

  • optlib ({'optuna', 'baytune', 'nevergrad', 'chocolate', 'skopt'}, optional) – Which optimizer to sample and train with.

  • space (dict, optional) – The hyper space to search, see get_hyper_space for the default.

  • score_compression (float, optional) – Raise scores to this power in order to compress or accentuate the differences. The lower this is, the more the selector will sample from various optimizers rather than quickly specializing.

  • max_training_steps (int, optional) – The maximum number of trials to train the optimizer with. Setting this can be helpful when the optimizer itself becomes costly to train (e.g. for Gaussian Processes).

  • progbar (bool, optional) – Show live progress of the best contraction found so far.

  • optlib_opts – Supplied to the hyper-optimizer library initialization.
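
To make the options above concrete, the sketch below runs a HyperOptimizer on a small made-up ring contraction; the geometry, index sizes, and option values are purely illustrative, and contraction_cost/contraction_width are assumed from the ContractionTree API.

import cotengra as ctg

# a tiny, made-up ring of four matrices: a-b, b-c, c-d, d-a
inputs = [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')]
output = ()
size_dict = {ix: 8 for ix in 'abcd'}

opt = ctg.HyperOptimizer(
    methods=['greedy'],   # restrict the search to one registered finder
    minimize='combo',     # a combined cost score, as listed above
    max_repeats=32,
    reconf_opts={},       # post-process each trial with default subtree reconfiguration
    progbar=False,
)

tree = opt.search(inputs, output, size_dict)
print(tree.contraction_cost(), tree.contraction_width())

After the search, get_trials(), print_trials() and to_df() expose the individual trials, while the tree and path properties hold the best result found.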

plot_trials#
plot_trials_alt#
plot_scatter#
plot_scatter_alt#
property minimize#
property parallel#
property tree#
property path#
setup(inputs, output, size_dict)#
get_score(trial)#
_maybe_cancel_futures()#
_maybe_report_result(setting, trial)#
_gen_results(repeats, trial_fn, trial_args)#
_get_and_report_next_future()#
_gen_results_parallel(repeats, trial_fn, trial_args)#
search(inputs, output, size_dict)#

Run this optimizer and return the ContractionTree for the best path it finds.

get_tree()#

Return the ContractionTree for the best path found.

__call__(inputs, output, size_dict, memory_limit=None)#

opt_einsum interface: runs the search and returns the best path found directly.
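
Because HyperOptimizer subclasses opt_einsum.paths.PathOptimizer, an instance can be handed straight to opt_einsum; a minimal sketch with made-up array shapes:

import numpy as np
import opt_einsum as oe
import cotengra as ctg

x, y, z = (np.random.rand(16, 16) for _ in range(3))

# opt_einsum accepts any PathOptimizer instance via `optimize=` and will
# invoke it internally to build the contraction path
opt = ctg.HyperOptimizer(max_repeats=16)
out = oe.contract('ab,bc,cd->ad', x, y, z, optimize=opt)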

get_trials(sort=None)#
print_trials(sort=None)#
to_df()#
cotengra.hyper.sortedtuple(x)#
cotengra.hyper.hash_contraction_a(inputs, output, size_dict)#
cotengra.hyper.hash_contraction_b(inputs, output, size_dict)#
cotengra.hyper.hash_contraction(inputs, output, size_dict, method='a')#

Compute a hash for a particular contraction geometry.
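
A short sketch of hashing a made-up contraction directly:

from cotengra.hyper import hash_contraction

inputs = [('a', 'b'), ('b', 'c')]
output = ('a', 'c')
size_dict = {'a': 2, 'b': 3, 'c': 4}

# method 'a' is fast but, per the hash_method note below, does not
# recognize when index labels are merely permuted
key = hash_contraction(inputs, output, size_dict, method='a')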

class cotengra.hyper.ReusableHyperOptimizer(*opt_args, directory=None, overwrite=False, set_surface_order=False, hash_method='a', cache_only=False, **opt_kwargs)#

Bases: opt_einsum.paths.PathOptimizer

Like HyperOptimizer but it will re-instantiate the optimizer whenever a new contraction is detected, and also cache the paths found.

Parameters:
  • opt_args – Supplied to HyperOptimizer.

  • directory (None or str, optional) – If specified use this directory as a persistent cache.

  • overwrite (bool, optional) – If True, the optimizer will always run, overwriting old results in the cache. This can be used to update paths without deleting the whole cache.

  • set_surface_order (bool, optional) – If True, when reloading a path to turn into a ContractionTree, the 'surface order' of the path (used for compressed paths) will be set manually to match the order of the path stored on disk.

  • hash_method ({'a', ...}, optional) – The method used to hash the contraction tree. The default, 'a', is faster but doesn’t recognize when indices are permuted.

  • cache_only (bool, optional) – If True, the optimizer will only use the cache, and will raise KeyError if a contraction is not found.

  • opt_kwargs – Supplied to HyperOptimizer.
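
As a hedged sketch, a persistent, cache-backed optimizer might be set up as follows; the directory name, geometry, and option values are illustrative only.

import cotengra as ctg

opt = ctg.ReusableHyperOptimizer(
    max_repeats=64,               # forwarded to the underlying HyperOptimizer
    directory='ctg_path_cache',   # example name for a persistent on-disk cache
)

inputs = [('a', 'b'), ('b', 'c'), ('c', 'd')]
output = ('a', 'd')
size_dict = {ix: 8 for ix in 'abcd'}

# the first call runs a fresh search and caches the result; repeating the
# same contraction later (even in a new process) reuses the cached path
tree = opt.search(inputs, output, size_dict)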

property last_opt#
hash_query(inputs, output, size_dict)#

Hash the contraction specification, returning the hash and whether the contraction is already present in the cache, as a tuple.

_compute_path(inputs, output, size_dict)#
update_from_tree(tree, overwrite=True)#

Explicitly add the contraction that tree represents into the cache, for example if you have manually improved it via reconfiguring. If overwrite=False and the contraction is already present then do nothing.
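
Continuing the ReusableHyperOptimizer sketch above, one might manually improve the cached tree and write it back; subtree_reconfigure is assumed here to return an improved copy of the tree.

# `tree` and `opt` as in the previous sketch
better = tree.subtree_reconfigure()
opt.update_from_tree(better)   # overwrite defaults to True, so the cache entry is replaced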

__call__(inputs, output, size_dict, memory_limit=None)#
search(inputs, output, size_dict)#
cleanup()#