:py:mod:`cotengra.hyperoptimizers.hyper`
========================================

.. py:module:: cotengra.hyperoptimizers.hyper

.. autoapi-nested-parse::

   Base hyper optimization functionality.


Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   cotengra.hyperoptimizers.hyper.TrialSetObjective
   cotengra.hyperoptimizers.hyper.TrialConvertTree
   cotengra.hyperoptimizers.hyper.TrialTreeMulti
   cotengra.hyperoptimizers.hyper.SlicedTrialFn
   cotengra.hyperoptimizers.hyper.SimulatedAnnealingTrialFn
   cotengra.hyperoptimizers.hyper.ReconfTrialFn
   cotengra.hyperoptimizers.hyper.SlicedReconfTrialFn
   cotengra.hyperoptimizers.hyper.CompressedReconfTrial
   cotengra.hyperoptimizers.hyper.ComputeScore
   cotengra.hyperoptimizers.hyper.HyperOptimizer
   cotengra.hyperoptimizers.hyper.ReusableHyperOptimizer
   cotengra.hyperoptimizers.hyper.HyperCompressedOptimizer
   cotengra.hyperoptimizers.hyper.ReusableHyperCompressedOptimizer
   cotengra.hyperoptimizers.hyper.HyperMultiOptimizer


Functions
~~~~~~~~~

.. autoapisummary::

   cotengra.hyperoptimizers.hyper.get_default_hq_methods
   cotengra.hyperoptimizers.hyper.get_default_optlib
   cotengra.hyperoptimizers.hyper.get_hyper_space
   cotengra.hyperoptimizers.hyper.get_hyper_constants
   cotengra.hyperoptimizers.hyper.register_hyper_optlib
   cotengra.hyperoptimizers.hyper.register_hyper_function
   cotengra.hyperoptimizers.hyper.list_hyper_functions
   cotengra.hyperoptimizers.hyper.base_trial_fn
   cotengra.hyperoptimizers.hyper.progress_description
   cotengra.hyperoptimizers.hyper.sortedtuple
   cotengra.hyperoptimizers.hyper.make_hashable
   cotengra.hyperoptimizers.hyper.hash_contraction_a
   cotengra.hyperoptimizers.hyper.hash_contraction_b
   cotengra.hyperoptimizers.hyper.hash_contraction


Attributes
~~~~~~~~~~

.. autoapisummary::

   cotengra.hyperoptimizers.hyper._PATH_FNS
   cotengra.hyperoptimizers.hyper._OPTLIB_FNS
   cotengra.hyperoptimizers.hyper._HYPER_SEARCH_SPACE
   cotengra.hyperoptimizers.hyper._HYPER_CONSTANTS


.. py:function:: get_default_hq_methods()

.. py:function:: get_default_optlib()

.. py:data:: _PATH_FNS

.. py:data:: _OPTLIB_FNS

.. py:data:: _HYPER_SEARCH_SPACE

.. py:data:: _HYPER_CONSTANTS

.. py:function:: get_hyper_space()

.. py:function:: get_hyper_constants()

.. py:function:: register_hyper_optlib(name, init_optimizers, get_setting, report_result)

.. py:function:: register_hyper_function(name, ssa_func, space, constants=None)

   Register a contraction path finder to be used by the hyper-optimizer.

   :param name: The name to call the method.
   :type name: str
   :param ssa_func: The raw function that returns a ``ContractionTree``, with signature
       ``(inputs, output, size_dict, **kwargs)``.
   :type ssa_func: callable
   :param space: The space of hyper-parameters to search.
   :type space: dict[str, dict]
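As an illustration, here is a minimal sketch of registering a custom finder.
The ``temperature`` hyper-parameter is purely illustrative (the function below
ignores it), and the search-space schema with ``'type'``, ``'min'`` and
``'max'`` entries is an assumption modelled on the built-in spaces returned by
``get_hyper_space``:

.. code-block:: python

   import cotengra as ctg
   from cotengra.hyperoptimizers.hyper import register_hyper_function

   def tree_via_greedy(inputs, output, size_dict, temperature=1.0, **kwargs):
       # find a path with the built-in greedy finder, then wrap it in a
       # ContractionTree - 'temperature' stands in for a real tunable knob
       path = ctg.array_contract_path(
           inputs, output=output, size_dict=size_dict, optimize="greedy"
       )
       return ctg.ContractionTree.from_path(
           inputs, output, size_dict, path=path
       )

   register_hyper_function(
       name="my-greedy",
       ssa_func=tree_via_greedy,
       space={"temperature": {"type": "FLOAT", "min": 0.1, "max": 2.0}},
   )

Once registered, ``"my-greedy"`` can be passed via the ``methods`` argument of
``HyperOptimizer`` below, with ``temperature`` sampled and tuned per trial.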
.. py:function:: list_hyper_functions()

   Return a list of currently registered hyper contraction finders.

.. py:function:: base_trial_fn(inputs, output, size_dict, method, **kwargs)

.. py:class:: TrialSetObjective(trial_fn, objective)

   .. py:method:: __call__(*args, **kwargs)

.. py:class:: TrialConvertTree(trial_fn, cls)

   .. py:method:: __call__(*args, **kwargs)

.. py:class:: TrialTreeMulti(trial_fn, varmults, numconfigs)

   .. py:method:: __call__(*args, **kwargs)

.. py:class:: SlicedTrialFn(trial_fn, **opts)

   .. py:method:: __call__(*args, **kwargs)

.. py:class:: SimulatedAnnealingTrialFn(trial_fn, **opts)

   .. py:method:: __call__(*args, **kwargs)

.. py:class:: ReconfTrialFn(trial_fn, forested=False, parallel=False, **opts)

   .. py:method:: __call__(*args, **kwargs)

.. py:class:: SlicedReconfTrialFn(trial_fn, forested=False, parallel=False, **opts)

   .. py:method:: __call__(*args, **kwargs)

.. py:class:: CompressedReconfTrial(trial_fn, minimize, **opts)

   .. py:method:: __call__(*args, **kwargs)

.. py:class:: ComputeScore(fn, score_fn, score_compression=0.75, score_smudge=1e-06, on_trial_error='warn', seed=0)

   The final score wrapper, which performs some simple arithmetic on the
   trial score to make it more suitable for hyper-optimization.

   .. py:method:: __call__(*args, **kwargs)

.. py:function:: progress_description(best, info='concise')
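``HyperOptimizer`` (below) chains these wrappers around a raw trial function
itself, for instance when ``reconf_opts`` or ``slicing_opts`` are given. A
hedged sketch of the equivalent manual composition follows; the option names
``subtree_size`` and ``target_size`` are assumptions borrowed from cotengra's
subtree reconfiguration and slicing interfaces, and running the ``'greedy'``
method with all-default hyper-parameters is likewise assumed to work:

.. code-block:: python

   from cotengra.hyperoptimizers.hyper import (
       ReconfTrialFn,
       SlicedTrialFn,
       base_trial_fn,
   )

   # a tiny example contraction geometry
   inputs = [("a", "b"), ("b", "c"), ("c", "d")]
   output = ("a", "d")
   size_dict = {"a": 2, "b": 8, "c": 8, "d": 2}

   # wrap the raw trial function so that, for example, each trial tree is
   # first reconfigured and then sliced before being scored
   trial_fn = ReconfTrialFn(base_trial_fn, subtree_size=8)
   trial_fn = SlicedTrialFn(trial_fn, target_size=2**6)

   # one call produces one trial for the given method and hyper-parameters
   trial = trial_fn(inputs, output, size_dict, method="greedy")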
.. py:class:: HyperOptimizer(methods=None, minimize='flops', max_repeats=128, max_time=None, parallel='auto', simulated_annealing_opts=None, slicing_opts=None, slicing_reconf_opts=None, reconf_opts=None, optlib=None, space=None, score_compression=0.75, on_trial_error='warn', max_training_steps=None, progbar=False, **optlib_opts)

   Bases: :py:obj:`cotengra.oe.PathOptimizer`

   A path optimizer that samples a series of contraction trees while
   optimizing the hyper-parameters used to generate them.

   :param methods: Which method(s) to use from ``list_hyper_functions()``.
   :type methods: None or sequence[str] or str, optional
   :param minimize: How to score each trial, used to train the optimizer and rank the
       results. If a custom callable, it should take a ``trial`` dict as its
       argument and return a single float.
   :type minimize: str, Objective or callable, optional
   :param max_repeats: The maximum number of trial contraction trees to generate.
       Default: 128.
   :type max_repeats: int, optional
   :param max_time: The maximum amount of time to run for. Use ``None`` for no limit. You
       can also set an estimated execution 'rate' here like ``'rate:1e9'``
       that will terminate the search when the estimated FLOPs of the best
       contraction found divided by the rate is greater than the time spent
       searching, allowing quick termination on easy contractions.
   :type max_time: None or float, optional
   :param parallel: Whether to parallelize the search.
   :type parallel: 'auto', False, True, int, or distributed.Client
   :param slicing_opts: If supplied, once a trial contraction path is found, try slicing with
       the given options, and then update the flops and size of the trial
       with the sliced versions.
   :type slicing_opts: dict, optional
   :param slicing_reconf_opts: If supplied, once a trial contraction path is found, try slicing
       interleaved with subtree reconfiguration with the given options, and
       then update the flops and size of the trial with the sliced and
       reconfigured versions.
   :type slicing_reconf_opts: dict, optional
   :param reconf_opts: If supplied, once a trial contraction path is found, try subtree
       reconfiguration with the given options, and then update the flops and
       size of the trial with the reconfigured versions.
   :type reconf_opts: dict, optional
   :param optlib: Which optimizer to sample and train with.
   :type optlib: {'baytune', 'nevergrad', 'chocolate', 'skopt'}, optional
   :param space: The hyper space to search, see ``get_hyper_space`` for the default.
   :type space: dict, optional
   :param score_compression: Raise scores to this power in order to compress or accentuate the
       differences. The lower this is, the more the selector will sample from
       various optimizers rather than quickly specializing.
   :type score_compression: float, optional
   :param on_trial_error: What to do if a trial fails. If ``'warn'`` (default), a warning will
       be printed and the trial will be given a score of ``inf``. If
       ``'raise'`` the error will be raised. If ``'ignore'`` the trial will
       be given a score of ``inf`` silently.
   :type on_trial_error: {'warn', 'raise', 'ignore'}, optional
   :param max_training_steps: The maximum number of trials to train the optimizer with. Setting
       this can be helpful when the optimizer itself becomes costly to train
       (e.g. for Gaussian Processes).
   :type max_training_steps: int, optional
   :param progbar: Show live progress of the best contraction found so far.
   :type progbar: bool, optional
   :param optlib_opts: Supplied to the hyper-optimizer library initialization.

   .. py:property:: minimize

   .. py:property:: parallel

   .. py:property:: tree

   .. py:property:: path

   .. py:attribute:: compressed
      :value: False

   .. py:attribute:: multicontraction
      :value: False

   .. py:attribute:: plot_trials

   .. py:attribute:: plot_trials_alt

   .. py:attribute:: plot_scatter

   .. py:attribute:: plot_scatter_alt

   .. py:method:: setup(inputs, output, size_dict)

   .. py:method:: _maybe_cancel_futures()

   .. py:method:: _maybe_report_result(setting, trial)

   .. py:method:: _gen_results(repeats, trial_fn, trial_args)

   .. py:method:: _get_and_report_next_future()

   .. py:method:: _gen_results_parallel(repeats, trial_fn, trial_args)

   .. py:method:: _search(inputs, output, size_dict)

   .. py:method:: search(inputs, output, size_dict)

      Run this optimizer and return the ``ContractionTree`` for the best
      path it finds.

   .. py:method:: get_tree()

      Return the ``ContractionTree`` for the best path found.

   .. py:method:: __call__(inputs, output, size_dict, memory_limit=None)

      ``opt_einsum`` interface, returns direct ``path``.

   .. py:method:: get_trials(sort=None)

   .. py:method:: print_trials(sort=None)

   .. py:method:: to_df()

      Create a single ``pandas.DataFrame`` with all trials and scores.

   .. py:method:: to_dfs_parametrized()

      Create a ``pandas.DataFrame`` for each method, with all parameters and
      scores for each trial.
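A typical standalone invocation, using cotengra's random test-equation helper
to generate a contraction; the ``__call__`` method additionally provides the
``opt_einsum``-compatible interface returning a raw path:

.. code-block:: python

   import cotengra as ctg

   # a random test contraction: input/output index tuples and dimensions
   inputs, output, shapes, size_dict = ctg.utils.rand_equation(
       n=10, reg=3, seed=42
   )

   opt = ctg.HyperOptimizer(
       max_repeats=32,
       reconf_opts={},  # finish each trial with a default reconf pass
       progbar=True,
   )

   # run the search, returning the best ContractionTree found
   tree = opt.search(inputs, output, size_dict)
   print(tree.contraction_cost(), tree.contraction_width())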
.. py:function:: sortedtuple(x)

.. py:function:: make_hashable(x)

   Make ``x`` hashable by recursively turning lists into tuples and dicts
   into sorted tuples of key-value pairs.

.. py:function:: hash_contraction_a(inputs, output, size_dict)

.. py:function:: hash_contraction_b(inputs, output, size_dict)

.. py:function:: hash_contraction(inputs, output, size_dict, method='a')

   Compute a hash for a particular contraction geometry.

.. py:class:: ReusableHyperOptimizer(*, directory=None, overwrite=False, hash_method='a', cache_only=False, **opt_kwargs)

   Bases: :py:obj:`cotengra.oe.PathOptimizer`

   Like ``HyperOptimizer`` but it will re-instantiate the optimizer whenever
   a new contraction is detected, and also cache the paths (and sliced
   indices) found.

   :param directory: If specified use this directory as a persistent cache. If ``True``
       auto generate a directory in the current working directory based on
       the options which are most likely to affect the path (see
       `ReusableHyperOptimizer.get_path_relevant_opts`).
   :type directory: None, True, or str, optional
   :param overwrite: If ``True``, the optimizer will always run, overwriting old results in
       the cache. This can be used to update paths without deleting the whole
       cache.
   :type overwrite: bool, optional
   :param set_surface_order: If ``True``, when reloading a path to turn into a ``ContractionTree``,
       the 'surface order' of the path (used for compressed paths) will be
       set manually to the order the disk path is stored in.
   :type set_surface_order: bool, optional
   :param hash_method: The method used to hash the contraction tree. The default, ``'a'``, is
       faster to hash but doesn't recognize when indices are permuted.
   :type hash_method: {'a', 'b', ...}, optional
   :param cache_only: If ``True``, the optimizer will only use the cache, and will raise
       ``KeyError`` if a contraction is not found.
   :type cache_only: bool, optional
   :param opt_kwargs: Supplied to ``HyperOptimizer``.

   .. py:property:: last_opt

   .. py:attribute:: suboptimizer

   .. py:attribute:: set_surface_order
      :value: False

   .. py:method:: get_path_relevant_opts()

      Get a frozenset of the options that are most likely to affect the
      path. These are the options used when the directory name is not
      manually specified.

   .. py:method:: auto_hash_path_relevant_opts()

      Automatically hash the path relevant options used to create the
      optimizer.

   .. py:method:: hash_query(inputs, output, size_dict)

      Hash the contraction specification, returning this and whether the
      contraction is already present, as a tuple.

   .. py:method:: _compute_path(inputs, output, size_dict)

   .. py:method:: update_from_tree(tree, overwrite=True)

      Explicitly add the contraction that ``tree`` represents into the
      cache. For example, if you have manually improved it via reconfing.
      If ``overwrite=False`` and the contraction is already present then do
      nothing.

   .. py:method:: __call__(inputs, output, size_dict, memory_limit=None)

   .. py:method:: search(inputs, output, size_dict)

   .. py:method:: cleanup()
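Typical usage with a persistent on-disk cache:

.. code-block:: python

   import cotengra as ctg

   inputs, output, shapes, size_dict = ctg.utils.rand_equation(
       n=10, reg=3, seed=0
   )

   opt = ctg.ReusableHyperOptimizer(
       directory="ctg_path_cache",  # or True to auto-name from the options
       max_repeats=64,              # forwarded to each HyperOptimizer run
   )

   # the first call runs a full hyper-optimization and caches the result
   tree = opt.search(inputs, output, size_dict)

   # the same geometry is then recognized by its hash and served from the
   # cache, even in a fresh process pointed at the same directory
   tree_again = opt.search(inputs, output, size_dict)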
.. py:class:: HyperCompressedOptimizer(chi=None, methods=('greedy-compressed', 'greedy-span', 'kahypar-agglom'), minimize='peak-compressed', **kwargs)

   Bases: :py:obj:`HyperOptimizer`

   A compressed contraction path optimizer that samples a series of ordered
   contraction trees while optimizing the hyper-parameters used to generate
   them.

   :param chi: The maximum bond dimension to compress to. If ``None`` then use the
       square of the largest existing dimension. If ``minimize`` is specified
       as a full scoring function, this is ignored.
   :type chi: None or int, optional
   :param methods: Which method(s) to use from ``list_hyper_functions()``.
   :type methods: None or sequence[str] or str, optional
   :param minimize: How to score each trial, used to train the optimizer and rank the
       results. If a custom callable, it should take a ``trial`` dict as its
       argument and return a single float.
   :type minimize: str, Objective or callable, optional
   :param max_repeats: The maximum number of trial contraction trees to generate.
       Default: 128.
   :type max_repeats: int, optional
   :param max_time: The maximum amount of time to run for. Use ``None`` for no limit. You
       can also set an estimated execution 'rate' here like ``'rate:1e9'``
       that will terminate the search when the estimated FLOPs of the best
       contraction found divided by the rate is greater than the time spent
       searching, allowing quick termination on easy contractions.
   :type max_time: None or float, optional
   :param parallel: Whether to parallelize the search.
   :type parallel: 'auto', False, True, int, or distributed.Client
   :param slicing_opts: If supplied, once a trial contraction path is found, try slicing with
       the given options, and then update the flops and size of the trial
       with the sliced versions.
   :type slicing_opts: dict, optional
   :param slicing_reconf_opts: If supplied, once a trial contraction path is found, try slicing
       interleaved with subtree reconfiguration with the given options, and
       then update the flops and size of the trial with the sliced and
       reconfigured versions.
   :type slicing_reconf_opts: dict, optional
   :param reconf_opts: If supplied, once a trial contraction path is found, try subtree
       reconfiguration with the given options, and then update the flops and
       size of the trial with the reconfigured versions.
   :type reconf_opts: dict, optional
   :param optlib: Which optimizer to sample and train with.
   :type optlib: {'baytune', 'nevergrad', 'chocolate', 'skopt'}, optional
   :param space: The hyper space to search, see ``get_hyper_space`` for the default.
   :type space: dict, optional
   :param score_compression: Raise scores to this power in order to compress or accentuate the
       differences. The lower this is, the more the selector will sample from
       various optimizers rather than quickly specializing.
   :type score_compression: float, optional
   :param max_training_steps: The maximum number of trials to train the optimizer with. Setting
       this can be helpful when the optimizer itself becomes costly to train
       (e.g. for Gaussian Processes).
   :type max_training_steps: int, optional
   :param progbar: Show live progress of the best contraction found so far.
   :type progbar: bool, optional
   :param optlib_opts: Supplied to the hyper-optimizer library initialization.

   .. py:attribute:: compressed
      :value: True

   .. py:attribute:: multicontraction
      :value: False

.. py:class:: ReusableHyperCompressedOptimizer(chi=None, methods=('greedy-compressed', 'greedy-span', 'kahypar-agglom'), minimize='peak-compressed', **kwargs)

   Bases: :py:obj:`ReusableHyperOptimizer`

   Like ``HyperCompressedOptimizer`` but it will re-instantiate the optimizer
   whenever a new contraction is detected, and also cache the paths found.

   :param chi: The maximum bond dimension to compress to. If ``None`` then use the
       square of the largest existing dimension. If ``minimize`` is specified
       as a full scoring function, this is ignored.
   :type chi: None or int, optional
   :param directory: If specified use this directory as a persistent cache. If ``True``
       auto generate a directory in the current working directory based on
       the options which are most likely to affect the path (see
       `ReusableHyperOptimizer.get_path_relevant_opts`).
   :type directory: None, True, or str, optional
   :param overwrite: If ``True``, the optimizer will always run, overwriting old results in
       the cache. This can be used to update paths without deleting the whole
       cache.
   :type overwrite: bool, optional
   :param set_surface_order: If ``True``, when reloading a path to turn into a ``ContractionTree``,
       the 'surface order' of the path (used for compressed paths) will be
       set manually to the order the disk path is stored in.
   :type set_surface_order: bool, optional
   :param hash_method: The method used to hash the contraction tree. The default, ``'a'``, is
       faster to hash but doesn't recognize when indices are permuted.
   :type hash_method: {'a', 'b', ...}, optional
   :param cache_only: If ``True``, the optimizer will only use the cache, and will raise
       ``KeyError`` if a contraction is not found.
   :type cache_only: bool, optional
   :param opt_kwargs: Supplied to ``HyperCompressedOptimizer``.

   .. py:attribute:: suboptimizer

   .. py:attribute:: set_surface_order
      :value: True
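For example, finding an ordered tree suitable for compressed contraction at a
fixed maximum bond dimension; the reusable variant above is a drop-in
replacement that adds path caching. Note the default ``methods`` include
``'kahypar-agglom'``, which assumes the optional ``kahypar`` dependency is
installed:

.. code-block:: python

   import cotengra as ctg

   inputs, output, shapes, size_dict = ctg.utils.rand_equation(
       n=10, reg=3, seed=0
   )

   opt = ctg.HyperCompressedOptimizer(
       chi=16,  # compress intermediate bonds to this dimension
       max_repeats=32,
   )

   # the resulting tree is ordered, for use with compressed contraction
   tree = opt.search(inputs, output, size_dict)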
.. py:class:: HyperMultiOptimizer(methods=None, minimize='flops', max_repeats=128, max_time=None, parallel='auto', simulated_annealing_opts=None, slicing_opts=None, slicing_reconf_opts=None, reconf_opts=None, optlib=None, space=None, score_compression=0.75, on_trial_error='warn', max_training_steps=None, progbar=False, **optlib_opts)

   Bases: :py:obj:`HyperOptimizer`

   A path optimizer that samples a series of contraction trees while
   optimizing the hyper-parameters used to generate them.

   :param methods: Which method(s) to use from ``list_hyper_functions()``.
   :type methods: None or sequence[str] or str, optional
   :param minimize: How to score each trial, used to train the optimizer and rank the
       results. If a custom callable, it should take a ``trial`` dict as its
       argument and return a single float.
   :type minimize: str, Objective or callable, optional
   :param max_repeats: The maximum number of trial contraction trees to generate.
       Default: 128.
   :type max_repeats: int, optional
   :param max_time: The maximum amount of time to run for. Use ``None`` for no limit. You
       can also set an estimated execution 'rate' here like ``'rate:1e9'``
       that will terminate the search when the estimated FLOPs of the best
       contraction found divided by the rate is greater than the time spent
       searching, allowing quick termination on easy contractions.
   :type max_time: None or float, optional
   :param parallel: Whether to parallelize the search.
   :type parallel: 'auto', False, True, int, or distributed.Client
   :param slicing_opts: If supplied, once a trial contraction path is found, try slicing with
       the given options, and then update the flops and size of the trial
       with the sliced versions.
   :type slicing_opts: dict, optional
   :param slicing_reconf_opts: If supplied, once a trial contraction path is found, try slicing
       interleaved with subtree reconfiguration with the given options, and
       then update the flops and size of the trial with the sliced and
       reconfigured versions.
   :type slicing_reconf_opts: dict, optional
   :param reconf_opts: If supplied, once a trial contraction path is found, try subtree
       reconfiguration with the given options, and then update the flops and
       size of the trial with the reconfigured versions.
   :type reconf_opts: dict, optional
   :param optlib: Which optimizer to sample and train with.
   :type optlib: {'baytune', 'nevergrad', 'chocolate', 'skopt'}, optional
   :param space: The hyper space to search, see ``get_hyper_space`` for the default.
   :type space: dict, optional
   :param score_compression: Raise scores to this power in order to compress or accentuate the
       differences. The lower this is, the more the selector will sample from
       various optimizers rather than quickly specializing.
   :type score_compression: float, optional
   :param on_trial_error: What to do if a trial fails. If ``'warn'`` (default), a warning will
       be printed and the trial will be given a score of ``inf``. If
       ``'raise'`` the error will be raised. If ``'ignore'`` the trial will
       be given a score of ``inf`` silently.
   :type on_trial_error: {'warn', 'raise', 'ignore'}, optional
   :param max_training_steps: The maximum number of trials to train the optimizer with. Setting
       this can be helpful when the optimizer itself becomes costly to train
       (e.g. for Gaussian Processes).
   :type max_training_steps: int, optional
   :param progbar: Show live progress of the best contraction found so far.
   :type progbar: bool, optional
   :param optlib_opts: Supplied to the hyper-optimizer library initialization.

   .. py:attribute:: compressed
      :value: False

   .. py:attribute:: multicontraction
      :value: True
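Since it subclasses ``HyperOptimizer``, the search interface is unchanged;
what distinguishes it is the ``multicontraction`` flag, under which each trial
is built and scored via ``TrialTreeMulti``. A minimal, heavily hedged sketch,
assuming a generic equation can be searched in this mode just like with
``HyperOptimizer`` (how the ``varmults`` and ``numconfigs`` consumed by
``TrialTreeMulti`` are supplied is not shown on this page):

.. code-block:: python

   import cotengra as ctg
   from cotengra.hyperoptimizers.hyper import HyperMultiOptimizer

   inputs, output, shapes, size_dict = ctg.utils.rand_equation(
       n=10, reg=3, seed=0
   )

   # drop-in replacement for HyperOptimizer, with multicontraction=True
   opt = HyperMultiOptimizer(max_repeats=16)
   tree = opt.search(inputs, output, size_dict)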