:py:mod:`cotengra`
==================

.. py:module:: cotengra

.. autoapi-nested-parse::

   Hyper optimized contraction trees for large tensor networks and einsums.

Subpackages
-----------

.. toctree::
   :titlesonly:
   :maxdepth: 3

   experimental/index.rst
   hyperoptimizers/index.rst
   pathfinders/index.rst

Submodules
----------

.. toctree::
   :titlesonly:
   :maxdepth: 1

   _version/index.rst
   contract/index.rst
   core/index.rst
   core_multi/index.rst
   hypergraph/index.rst
   interface/index.rst
   oe/index.rst
   parallel/index.rst
   plot/index.rst
   presets/index.rst
   schematic/index.rst
   scoring/index.rst
   slicer/index.rst
   utils/index.rst

Package Contents
----------------

Classes
~~~~~~~

.. autoapisummary::

   cotengra.ContractionTree
   cotengra.ContractionTreeCompressed
   cotengra.ContractionTreeMulti
   cotengra.HyperGraph
   cotengra.HyperCompressedOptimizer
   cotengra.HyperMultiOptimizer
   cotengra.HyperOptimizer
   cotengra.ReusableHyperCompressedOptimizer
   cotengra.ReusableHyperOptimizer
   cotengra.GreedyOptimizer
   cotengra.OptimalOptimizer
   cotengra.RandomGreedyOptimizer
   cotengra.FlowCutterOptimizer
   cotengra.QuickBBOptimizer
   cotengra.AutoHQOptimizer
   cotengra.AutoOptimizer
   cotengra.SliceFinder

Functions
~~~~~~~~~

.. autoapisummary::

   cotengra.get_hypergraph
   cotengra.get_hyper_space
   cotengra.hash_contraction
   cotengra.list_hyper_functions
   cotengra.array_contract
   cotengra.array_contract_expression
   cotengra.array_contract_path
   cotengra.array_contract_tree
   cotengra.einsum
   cotengra.einsum_expression
   cotengra.einsum_tree
   cotengra.register_preset
   cotengra.optimize_flowcutter
   cotengra.optimize_quickbb
   cotengra.plot_contractions
   cotengra.plot_contractions_alt
   cotengra.plot_scatter
   cotengra.plot_scatter_alt
   cotengra.plot_slicings
   cotengra.plot_slicings_alt
   cotengra.plot_tree
   cotengra.plot_tree_ring
   cotengra.plot_tree_span
   cotengra.plot_tree_tent
   cotengra.plot_trials
   cotengra.plot_trials_alt
   cotengra.get_symbol
   cotengra.get_symbol_map
   cotengra.hyper_optimize

Attributes
~~~~~~~~~~

.. autoapisummary::

   cotengra.auto_hq_optimize
   cotengra.auto_optimize
   cotengra.greedy_optimize
   cotengra.optimal_optimize
   cotengra.optimal_outer_optimize
   cotengra.UniformOptimizer
   cotengra.QuasiRandOptimizer
   cotengra.contract_expression
   cotengra.contract

.. py:class:: ContractionTree(inputs, output, size_dict, track_childless=False, track_flops=False, track_write=False, track_size=False, objective=None)

   Binary tree representing a tensor network contraction.

   :param inputs: The indices of each input tensor.
   :type inputs: sequence of str
   :param output: The output indices.
   :type output: str
   :param size_dict: The size of each index.
   :type size_dict: dict[str, int]
   :param track_childless: Whether to dynamically keep track of which nodes are childless. Useful if you are 'divisively' building the tree.
   :type track_childless: bool, optional
   :param track_flops: Whether to dynamically keep track of the total number of flops. If ``False`` you can still compute this once the tree is complete.
   :type track_flops: bool, optional
   :param track_write: Whether to dynamically keep track of the total number of elements written. If ``False`` you can still compute this once the tree is complete.
   :type track_write: bool, optional
   :param track_size: Whether to dynamically keep track of the largest tensor so far. If ``False`` you can still compute this once the tree is complete.
   :type track_size: bool, optional
   :param objective: A default objective function to use for further optimization and scoring, for example reconfiguring or computing the combo cost. If not supplied the default is to create a flops objective when needed.
   :type objective: str or Objective, optional

   .. attribute:: children

      Mapping of each node to its two children.

      :type: dict[node, tuple[node]]

   .. attribute:: info

      Information about the tree nodes. The key is the set of inputs (a frozenset of input indices) the node contains, or in other words, the subgraph of the node.
      The value is a dictionary to cache information about effective 'leg' indices, size, flops of formation etc.

      :type: dict[node, dict]

   .. py:property:: nslices

      Simple alias for how many independent contractions this tree represents overall.

   .. py:property:: nchunks

      The number of 'chunks' - determined by the number of sliced output indices.

   .. py:attribute:: total_cost

   .. py:attribute:: contraction_cost_compressed

   .. py:attribute:: total_cost_compressed

   .. py:attribute:: remove_ind_

   .. py:attribute:: restore_ind_

   .. py:attribute:: unslice_rand_

   .. py:attribute:: unslice_all_

   .. py:attribute:: subtree_reconfigure_

   .. py:attribute:: subtree_reconfigure_forest_

   .. py:attribute:: simulated_anneal

   .. py:attribute:: simulated_anneal_

   .. py:attribute:: parallel_temper

   .. py:attribute:: parallel_temper_

   .. py:attribute:: slice_

   .. py:attribute:: slice_and_reconfigure_

   .. py:attribute:: slice_and_reconfigure_forest_

   .. py:attribute:: compressed_reconfigure_

   .. py:attribute:: windowed_reconfigure_

   .. py:attribute:: path

   .. py:attribute:: ssa_path

   .. py:attribute:: path_surface

   .. py:attribute:: ssa_path_surface

   .. py:attribute:: plot_ring

   .. py:attribute:: plot_tent

   .. py:attribute:: plot_span

   .. py:attribute:: plot_flat

   .. py:attribute:: plot_circuit

   .. py:attribute:: plot_rubberband

   .. py:attribute:: plot_contractions

   .. py:attribute:: plot_contractions_alt

   .. py:method:: set_state_from(other)

      Set the internal state of this tree to that of ``other``.

   .. py:method:: copy()

      Create a copy of this ``ContractionTree``.

   .. py:method:: set_default_objective(objective)

      Set the objective function for this tree.

   .. py:method:: get_default_objective()

      Get the objective function for this tree.

   .. py:method:: get_default_combo_factor()

      Get the default combo factor for this tree.

   .. py:method:: node_to_terms(node)

      Turn a node -- a frozen set of ints -- into the corresponding terms -- a sequence of sets of str corresponding to input indices.

   .. py:method:: gen_leaves()

      Generate the nodes representing leaves of the contraction tree, i.e. of size 1 each corresponding to a single input tensor.

   .. py:method:: get_incomplete_nodes()

      Get the set of current nodes that have no children and the set of nodes that have no parents. These are the 'childless' and 'parentless' nodes respectively, that need to be contracted to complete the tree. The parentless nodes are grouped into the childless nodes that contain them as subgraphs.

      :returns: **groups** -- A mapping of childless nodes to the list of parentless nodes that are beneath them.
      :rtype: dict[frozenset[int], list[frozenset[int]]]

      .. seealso:: :obj:`autocomplete`

   .. py:method:: autocomplete(**contract_opts)

      Contract all remaining node groups (as computed by ``tree.get_incomplete_nodes``) in the tree to complete it.

      :param contract_opts: Options to pass to ``tree.contract_nodes``.

      .. seealso:: :obj:`get_incomplete_nodes`, :obj:`contract_nodes`

   .. py:method:: from_path(inputs, output, size_dict, *, path=None, ssa_path=None, autocomplete='auto', check=False, **kwargs)
      :classmethod:

      Create a (completed) ``ContractionTree`` from the usual inputs plus a standard contraction path or 'ssa_path' - you need to supply one.

      :param inputs: The input indices of each tensor, as single unicode characters.
      :type inputs: Sequence[Sequence[str]]
      :param output: The output indices.
      :type output: Sequence[str]
      :param size_dict: The size of each index.
      :type size_dict: dict[str, int]
      :param path: The contraction path, a sequence of pairs of tensor ids to contract. The ids are linear indices into the list of temporary tensors, which are recycled as each contraction pops a pair and appends the result. This or ``ssa_path`` must be supplied.
      :type path: Sequence[Sequence[int]], optional
      :param ssa_path: The contraction path, a sequence of pairs of indices to contract.
         The indices are single use, as if the result of each contraction is appended to the end of the list of temporary tensors without popping. This or ``path`` must be supplied.
      :type ssa_path: Sequence[Sequence[int]], optional
      :param autocomplete: Whether to automatically complete the path, i.e. contract all remaining nodes. If "auto" then a warning is issued if the path is not complete.
      :type autocomplete: "auto" or bool, optional
      :param check: Whether to perform some basic checks while creating the contraction nodes.
      :type check: bool, optional
      :rtype: ContractionTree

   .. py:method:: from_info(info, **kwargs)
      :classmethod:

      Create a ``ContractionTree`` from an ``opt_einsum.PathInfo`` object.

   .. py:method:: from_eq(eq, size_dict, **kwargs)
      :classmethod:

      Create an empty ``ContractionTree`` directly from an equation and set of shapes.

      :param eq: The einsum string equation.
      :type eq: str
      :param size_dict: The size of each index.
      :type size_dict: dict[str, int]

   .. py:method:: get_eq()

      Get the einsum equation corresponding to this tree. Note that this is the total (or original) equation, so includes indices which have been sliced.

      :returns: **eq**
      :rtype: str

   .. py:method:: get_shapes()

      Get the shapes of the input tensors corresponding to this tree.

      :returns: **shapes**
      :rtype: tuple[tuple[int]]

   .. py:method:: get_inputs_sliced()

      Get the input indices corresponding to a single slice of this tree, i.e. with sliced indices removed.

      :returns: **inputs**
      :rtype: tuple[tuple[str]]

   .. py:method:: get_output_sliced()

      Get the output indices corresponding to a single slice of this tree, i.e. with sliced indices removed.

      :returns: **output**
      :rtype: tuple[str]

   .. py:method:: get_eq_sliced()

      Get the einsum equation corresponding to a single slice of this tree, i.e. with sliced indices removed.

      :returns: **eq**
      :rtype: str

   .. py:method:: get_shapes_sliced()

      Get the shapes of the input tensors corresponding to a single slice of this tree, i.e. with sliced indices removed.

      :returns: **shapes**
      :rtype: tuple[tuple[int]]

   .. py:method:: from_edge_path(edge_path, inputs, output, size_dict, check=False, **kwargs)
      :classmethod:

      Create a ``ContractionTree`` from an edge elimination ordering.

   .. py:method:: _add_node(node, check=False)

   .. py:method:: _remove_node(node)

      Remove ``node`` from this tree and update the flops and maximum size if tracking them respectively, as well as input pre-processing.

   .. py:method:: compute_leaf_legs(i)

      Compute the effective 'outer' indices for the ith input tensor. This is not always simply the ith input indices, due to A) potential slicing and B) potential preprocessing.

   .. py:method:: has_preprocessing()

   .. py:method:: get_legs(node)

      Get the effective 'outer' indices for the collection of tensors in ``node``.

   .. py:method:: get_involved(node)

      Get all the indices involved in the formation of subgraph ``node``.

   .. py:method:: get_size(node)

      Get the tensor size of ``node``.

   .. py:method:: get_flops(node)

      Get the FLOPs for the pairwise contraction that will create ``node``.

   .. py:method:: get_can_dot(node)

      Get whether this contraction can be performed as a dot product (i.e. with ``tensordot``), or else requires ``einsum``, as it has indices that don't appear exactly twice in either the inputs or the output.

   .. py:method:: get_inds(node)

      Get the indices of this node - an ordered string version of ``get_legs`` that starts with ``tree.inputs`` and maintains the order they appear in each contraction 'ABC,abc->ABCabc', to match tensordot.

   .. py:method:: get_tensordot_axes(node)

      Get the ``axes`` arg for a tensordot contraction that produces ``node``. The pairs are sorted in order of appearance on the left input.

   .. py:method:: get_tensordot_perm(node)

      Get the permutation required, if any, to bring the tensordot output of this node's contraction into line with ``self.get_inds(node)``.
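The two path formats accepted by ``from_path`` above differ only in how tensor ids are assigned: linear ids index into a pool that shrinks and grows as operands are popped and results appended, while ssa ids are single-use. The relationship can be shown with a small pure-Python sketch (illustrative only, not cotengra's internal implementation):

```python
def ssa_to_linear(ssa_path, n):
    """Convert an ssa path over ``n`` inputs to a linear (opt_einsum style) path."""
    # pool of currently 'live' tensor ids, initially the n inputs
    ids = list(range(n))
    linear = []
    for i, j in ssa_path:
        # record the *positions* of the two ssa ids in the current pool
        linear.append((ids.index(i), ids.index(j)))
        # the operands are popped and the result appended,
        # receiving the next fresh ssa id
        ids = [x for x in ids if x not in (i, j)]
        ids.append(n)
        n += 1
    return linear

# three inputs: contract inputs 0 and 1 (result gets ssa id 3),
# then that result with input 2
print(ssa_to_linear([(0, 1), (3, 2)], 3))  # -> [(0, 1), (1, 0)]
```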
   .. py:method:: get_einsum_eq(node)

      Get the einsum string describing the contraction that produces ``node``. Unlike ``get_inds``, the characters are mapped into [a-zA-Z], for compatibility with ``numpy.einsum`` for example.

   .. py:method:: get_centrality(node)

   .. py:method:: total_flops(dtype=None, log=None)

      Sum the flops contribution from every node in the tree.

      :param dtype: Scale the answer depending on the assumed data type.
      :type dtype: {'float', 'complex', None}, optional

   .. py:method:: total_write()

      Sum the total amount of memory that will be created and operated on.

   .. py:method:: combo_cost(factor=DEFAULT_COMBO_FACTOR, combine=sum, log=None)

   .. py:method:: max_size(log=None)

      The size of the largest intermediate tensor.

   .. py:method:: peak_size(order=None, log=None)

      Get the peak concurrent size of tensors needed - this depends on the traversal order, i.e. the exact contraction path, not just the contraction tree.

   .. py:method:: contract_stats(force=False)

      Simultaneously compute the total flops, write and size of the contraction tree. This is more efficient than calling each of the individual methods separately. Once computed, each quantity is then automatically tracked.

      :returns: **stats** -- The total flops, write and size.
      :rtype: dict[str, int]

   .. py:method:: arithmetic_intensity()

      The ratio of total flops to total write - the higher the better for extracting good computational performance.

   .. py:method:: contraction_scaling()

      This is computed simply as the maximum number of indices involved in any single contraction, which will match the scaling assuming that all dimensions are equal.

   .. py:method:: contraction_cost(log=None)

      Get the total number of scalar operations ~ time complexity.

   .. py:method:: contraction_width(log=2)

      Get log2 of the size of the largest tensor.

   .. py:method:: compressed_contract_stats(chi=None, order='surface_order', compress_late=None)
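The cost and width quantities above can be illustrated with a small hand-rolled sketch (not cotengra's actual implementation): the scalar-operation count of a single pairwise contraction scales with the product of the sizes of all involved indices, while the width is the log2 size of a tensor:

```python
import math

# example index sizes, analogous to a ContractionTree's size_dict
size_dict = {"a": 4, "b": 8, "c": 2, "d": 8}

def pair_cost(lhs, rhs, out, size_dict):
    # scalar ops ~ product over the union of all involved indices
    involved = set(lhs) | set(rhs) | set(out)
    return math.prod(size_dict[ix] for ix in involved)

def width(term, size_dict):
    # log2 of the tensor's total number of entries
    return math.log2(math.prod(size_dict[ix] for ix in term))

print(pair_cost("ab", "bc", "ac", size_dict))  # 4 * 8 * 2 = 64
print(width("ad", size_dict))                  # log2(4 * 8) = 5.0
```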
   .. py:method:: total_flops_compressed(chi=None, order='surface_order', compress_late=None, dtype=None, log=None)

      Estimate the total flops for a compressed contraction of this tree with maximum bond size ``chi``. This includes basic estimates of the ops to perform contractions, QRs and SVDs.

   .. py:method:: total_write_compressed(chi=None, order='surface_order', compress_late=None, accel='auto', log=None)

      Compute the total size of all intermediate tensors when a compressed contraction is performed with maximum bond size ``chi``, ordered by ``order``. This is relevant for time complexity and e.g. autodiff space complexity (since every intermediate is kept).

   .. py:method:: combo_cost_compressed(chi=None, order='surface_order', compress_late=None, factor=None, log=None)

   .. py:method:: max_size_compressed(chi=None, order='surface_order', compress_late=None, log=None)

      Compute the maximum sized tensor produced when a compressed contraction is performed with maximum bond size ``chi``, ordered by ``order``. This is close to the ideal space complexity if only tensors that are being directly operated on are kept in memory.

   .. py:method:: peak_size_compressed(chi=None, order='surface_order', compress_late=None, accel='auto', log=None)

      Compute the peak size of combined intermediate tensors when a compressed contraction is performed with maximum bond size ``chi``, ordered by ``order``. This is the practical space complexity if one is not swapping intermediates in and out of memory.

   .. py:method:: contraction_width_compressed(chi=None, order='surface_order', compress_late=None, log=2)

      Compute log2 of the maximum sized tensor produced when a compressed contraction is performed with maximum bond size ``chi``, ordered by ``order``.

   .. py:method:: _update_tracked(node)

   .. py:method:: contract_nodes_pair(x, y, legs=None, cost=None, size=None, check=False)

      Contract node ``x`` with node ``y`` in the tree to create a new parent node, which is returned.

      :param x: The first node to contract.
      :type x: frozenset[int]
      :param y: The second node to contract.
      :type y: frozenset[int]
      :param legs: The effective 'legs' of the new node if already known. If not given, this is computed from the inputs of ``x`` and ``y``.
      :type legs: dict[str, int], optional
      :param cost: The cost of the contraction if already known. If not given, this is computed from the inputs of ``x`` and ``y``.
      :type cost: int, optional
      :param size: The size of the new node if already known. If not given, this is computed from the inputs of ``x`` and ``y``.
      :type size: int, optional
      :param check: Whether to check the inputs are valid.
      :type check: bool, optional
      :returns: **parent** -- The new parent node of ``x`` and ``y``.
      :rtype: frozenset[int]

   .. py:method:: contract_nodes(nodes, optimize='auto-hq', check=False, extra_opts=None)

      Contract an arbitrary number of ``nodes`` in the tree to build up a subtree. The root of this subtree (a new intermediate) is returned.

   .. py:method:: is_complete()

      Check every node has two children, unless it is a leaf.

   .. py:method:: get_default_order()

   .. py:method:: _traverse_ordered(order)

      Traverse the tree in the order that minimizes ``order(node)``, but still constrained to produce children before parents.

   .. py:method:: traverse(order=None)

      Generate, in order, all the node merges in this tree. Non-recursive! This ensures children are always visited before their parent.

      :param order: How to order the contractions within the tree. If a callable is given (which should take a node as its argument), try to contract nodes that minimize this function first.
      :type order: None or callable, optional
      :returns: The bottom up ordered sequence of tree merges, each a tuple of ``(parent, left_child, right_child)``.
      :rtype: generator[tuple[node]]

      .. seealso:: :obj:`descend`

   .. py:method:: descend(mode='dfs')

      Generate, from root to leaves, all the node merges in this tree. Non-recursive! This ensures parents are visited before their children.

      :param mode: How to expand from a parent.
      :type mode: {'dfs', 'bfs'}, optional
      :returns: The top down ordered sequence of tree merges, each a tuple of ``(parent, left_child, right_child)``.
      :rtype: generator[tuple[node]]

      .. seealso:: :obj:`traverse`

   .. py:method:: get_subtree(node, size, search='bfs', seed=None)

      Get a subtree spanning down from ``node`` which will have ``size`` leaves (themselves not necessarily leaves of the actual tree).

      :param node: The node of the tree to start with.
      :type node: node
      :param size: How many subtree leaves to aim for.
      :type size: int
      :param search: How to build the tree:

          - 'bfs': breadth first expansion
          - 'dfs': depth first expansion (largest nodes first)
          - 'random': random expansion

      :type search: {'bfs', 'dfs', 'random'}, optional
      :param seed: Random number generator seed, if ``search`` is 'random'.
      :type seed: None, int or random.Random, optional
      :returns: * **sub_leaves** (*tuple[node]*) -- Nodes which are subtree leaves.
                * **branches** (*tuple[node]*) -- Nodes which are between the subtree leaves and root.

   .. py:method:: remove_ind(ind, project=None, inplace=False)

      Remove (i.e. by default slice) index ``ind`` from this contraction tree, taking care to update all relevant information about each node.

   .. py:method:: restore_ind(ind, inplace=False)

      Restore (unslice or un-project) index ``ind`` to this contraction tree, taking care to update all relevant information about each node.

      :param ind: The index to restore.
      :type ind: str
      :param inplace: Whether to perform the restoration inplace or not.
      :type inplace: bool, optional
      :rtype: ContractionTree

   .. py:method:: unslice_rand(seed=None, inplace=False)

      Unslice (restore) a random index from this contraction tree.

      :param seed: Random number generator seed.
      :type seed: None, int or random.Random, optional
      :param inplace: Whether to perform the unslicing inplace or not.
      :type inplace: bool, optional
      :rtype: ContractionTree
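Slicing, as used by ``remove_ind`` and the ``unslice_*`` methods above, trades memory for flops: each sliced index of size ``d`` multiplies the number of independent contractions by ``d``, while removing that index from every tensor it appears in. A minimal sketch of this bookkeeping (illustrative only, not cotengra's internals):

```python
import math

size_dict = {"a": 2, "b": 4, "c": 8}

def slicing_stats(sliced_inds, size_dict, intermediates):
    """Count slices and the largest remaining tensor after slicing."""
    # number of independent contractions = product of the sliced sizes
    nslices = math.prod(size_dict[ix] for ix in sliced_inds)
    # each intermediate (given as a string of index labels) loses its
    # sliced indices, shrinking its size accordingly
    max_size = max(
        math.prod(size_dict[ix] for ix in term if ix not in sliced_inds)
        for term in intermediates
    )
    return nslices, max_size

# slicing 'c' (size 8) turns one size-64 intermediate into 8 size-8 pieces
print(slicing_stats({"c"}, size_dict, ["abc", "bc"]))  # -> (8, 8)
```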
   .. py:method:: unslice_all(inplace=False)

      Unslice (restore) all sliced indices from this contraction tree.

      :param inplace: Whether to perform the unslicing inplace or not.
      :type inplace: bool, optional
      :rtype: ContractionTree

   .. py:method:: calc_subtree_candidates(pwr=2, what='flops')

   .. py:method:: subtree_reconfigure(subtree_size=8, subtree_search='bfs', weight_what='flops', weight_pwr=2, select='max', maxiter=500, seed=None, minimize=None, optimize=None, inplace=False, progbar=False)

      Reconfigure subtrees of this tree with locally optimal paths.

      :param subtree_size: The size of subtree to consider. Cost is exponential in this.
      :type subtree_size: int, optional
      :param subtree_search: How to build the subtrees:

          - 'bfs': breadth-first-search creating balanced subtrees
          - 'dfs': depth-first-search creating imbalanced subtrees
          - 'random': random subtree building

      :type subtree_search: {'bfs', 'dfs', 'random'}, optional
      :param weight_what: When assessing nodes to build and optimize subtrees from, whether to score them by the (local) contraction cost, or tensor size.
      :type weight_what: {'flops', 'size'}, optional
      :param weight_pwr: When assessing nodes to build and optimize subtrees from, how to scale their score into a probability: ``score**(1 / weight_pwr)``. The larger this is the more explorative the algorithm is when ``select='random'``.
      :type weight_pwr: int, optional
      :param select: What order to select node subtrees to optimize:

          - 'max': choose the highest score first
          - 'min': choose the lowest score first
          - 'random': choose randomly weighted on score -- see ``weight_pwr``

      :type select: {'max', 'min', 'random'}, optional
      :param maxiter: How many subtree optimizations to perform; the algorithm can terminate before this if all subtrees have been optimized.
      :type maxiter: int, optional
      :param seed: A random seed (seeds python system random module).
      :type seed: int, optional
      :param minimize: Whether to minimize with respect to contraction flops or size.
      :type minimize: {'flops', 'size'}, optional
      :param inplace: Whether to perform the reconfiguration inplace or not.
      :type inplace: bool, optional
      :param progbar: Whether to show live progress of the reconfiguration.
      :type progbar: bool, optional
      :rtype: ContractionTree

   .. py:method:: subtree_reconfigure_forest(num_trees=8, num_restarts=10, restart_fraction=0.5, subtree_maxiter=100, subtree_size=10, subtree_search=('random', 'bfs'), subtree_select=('random', ), subtree_weight_what=('flops', 'size'), subtree_weight_pwr=(2, ), parallel='auto', parallel_maxiter_steps=4, minimize=None, seed=None, progbar=False, inplace=False)

      'Forested' version of ``subtree_reconfigure`` which is more explorative and can be parallelized. It stochastically generates a 'forest' of reconfigured trees, then only keeps some fraction of these to generate the next forest.

      :param num_trees: The number of trees to reconfigure at each stage.
      :type num_trees: int, optional
      :param num_restarts: The number of times to halt, prune and then restart the tree reconfigurations.
      :type num_restarts: int, optional
      :param restart_fraction: The fraction of trees to keep at each stage and generate the next forest from.
      :type restart_fraction: float, optional
      :param subtree_maxiter: Number of subtree reconfigurations per step. ``num_restarts * subtree_maxiter`` is the max number of total subtree reconfigurations for the final tree produced.
      :type subtree_maxiter: int, optional
      :param subtree_size: The size of subtrees to search for and reconfigure.
      :type subtree_size: int, optional
      :param subtree_search: Tuple of options for the ``search`` kwarg of :meth:`ContractionTree.subtree_reconfigure` to randomly sample.
      :type subtree_search: tuple[{'random', 'bfs', 'dfs'}], optional
      :param subtree_select: Tuple of options for the ``select`` kwarg of :meth:`ContractionTree.subtree_reconfigure` to randomly sample.
      :type subtree_select: tuple[{'random', 'max', 'min'}], optional
      :param subtree_weight_what: Tuple of options for the ``weight_what`` kwarg of :meth:`ContractionTree.subtree_reconfigure` to randomly sample.
      :type subtree_weight_what: tuple[{'flops', 'size'}], optional
      :param subtree_weight_pwr: Tuple of options for the ``weight_pwr`` kwarg of :meth:`ContractionTree.subtree_reconfigure` to randomly sample.
      :type subtree_weight_pwr: tuple[int], optional
      :param parallel: Whether to parallelize the search.
      :type parallel: 'auto', False, True, int, or distributed.Client
      :param parallel_maxiter_steps: If parallelizing, how many steps to break each reconfiguration into in order to evenly saturate many processes.
      :type parallel_maxiter_steps: int, optional
      :param minimize: Whether to minimize the total flops or maximum size of the contraction tree.
      :type minimize: {'flops', 'size', ..., Objective}, optional
      :param seed: A random seed to use.
      :type seed: None, int or random.Random, optional
      :param progbar: Whether to show live progress.
      :type progbar: bool, optional
      :param inplace: Whether to perform the subtree reconfiguration inplace.
      :type inplace: bool, optional
      :rtype: ContractionTree

   .. py:method:: slice(target_size=None, target_overhead=None, target_slices=None, temperature=0.01, minimize=None, allow_outer=True, max_repeats=16, reslice=False, seed=None, inplace=False)

      Slice this tree (turn some indices into indices which are explicitly summed over rather than being part of contractions). The indices are stored in ``tree.sliced_inds``, and the contraction width updated to take account of the slicing. Calling ``tree.contract(arrays)`` will then automatically perform the slicing and summation.

      :param target_size: The target number of entries in the largest tensor of the sliced contraction. The search algorithm will terminate after this is reached.
      :type target_size: int, optional
      :param target_slices: The target or minimum number of 'slices' to consider - individual contractions after slicing indices. The search algorithm will terminate after this is breached. This is on top of the current number of slices.
      :type target_slices: int, optional
      :param target_overhead: The target increase in total number of floating point operations. For example, a value of ``2.0`` will terminate the search just before the cost of computing all the slices individually breaches twice that of computing the original contraction all at once.
      :type target_overhead: float, optional
      :param temperature: How much to randomize the repeated search.
      :type temperature: float, optional
      :param minimize: Which metric to score the overhead increase against.
      :type minimize: {'flops', 'size', ..., Objective}, optional
      :param allow_outer: Whether to allow slicing of outer indices.
      :type allow_outer: bool, optional
      :param max_repeats: How many times to repeat the search with a slight randomization.
      :type max_repeats: int, optional
      :param reslice: Whether to reslice the tree, i.e. first remove all currently sliced indices and start the search again. Generally any 'good' sliced indices will be easily found again.
      :type reslice: bool, optional
      :param seed: A random seed or generator to use for the search.
      :type seed: None, int or random.Random, optional
      :param inplace: Whether to remove the indices from this tree inplace or not.
      :type inplace: bool, optional
      :rtype: ContractionTree

      .. seealso:: :obj:`SliceFinder`, :obj:`ContractionTree.slice_and_reconfigure`

   .. py:method:: slice_and_reconfigure(target_size, step_size=2, temperature=0.01, minimize=None, allow_outer=True, max_repeats=16, reslice=False, reconf_opts=None, progbar=False, inplace=False)

      Interleave slicing (removing indices into an exterior sum) with subtree reconfiguration to minimize the overhead induced by this slicing.
      :param target_size: Slice the tree until the maximum intermediate size is this or smaller.
      :type target_size: int
      :param step_size: The minimum size reduction to try and achieve before switching to a round of subtree reconfiguration.
      :type step_size: int, optional
      :param temperature: The temperature to supply to ``SliceFinder`` for searching for indices.
      :type temperature: float, optional
      :param minimize: The metric to minimize when slicing and reconfiguring subtrees.
      :type minimize: {'flops', 'size', ..., Objective}, optional
      :param max_repeats: The number of slicing attempts to perform per search.
      :type max_repeats: int, optional
      :param progbar: Whether to show live progress.
      :type progbar: bool, optional
      :param inplace: Whether to perform the slicing and reconfiguration inplace.
      :type inplace: bool, optional
      :param reconf_opts: Supplied to :meth:`ContractionTree.subtree_reconfigure` or :meth:`ContractionTree.subtree_reconfigure_forest`, depending on the ``'forested'`` key value.
      :type reconf_opts: None or dict, optional

   .. py:method:: slice_and_reconfigure_forest(target_size, step_size=2, num_trees=8, restart_fraction=0.5, temperature=0.02, max_repeats=32, reslice=False, minimize=None, allow_outer=True, parallel='auto', progbar=False, inplace=False, reconf_opts=None)

      'Forested' version of :meth:`ContractionTree.slice_and_reconfigure`. This maintains a 'forest' of trees with different slicing and subtree reconfiguration attempts, pruning the worst at each step and generating a new forest from the best.

      :param target_size: Slice the tree until the maximum intermediate size is this or smaller.
      :type target_size: int
      :param step_size: The minimum size reduction to try and achieve before switching to a round of subtree reconfiguration.
      :type step_size: int, optional
      :param num_restarts: The number of times to halt, prune and then restart the tree reconfigurations.
      :type num_restarts: int, optional
      :param restart_fraction: The fraction of trees to keep at each stage and generate the next forest from.
      :type restart_fraction: float, optional
      :param temperature: The temperature at which to randomize the sliced index search.
      :type temperature: float, optional
      :param max_repeats: The number of slicing attempts to perform per search.
      :type max_repeats: int, optional
      :param parallel: Whether to parallelize the search.
      :type parallel: 'auto', False, True, int, or distributed.Client
      :param progbar: Whether to show live progress.
      :type progbar: bool, optional
      :param inplace: Whether to perform the slicing and reconfiguration inplace.
      :type inplace: bool, optional
      :param reconf_opts: Supplied to :meth:`ContractionTree.slice_and_reconfigure`.
      :type reconf_opts: None or dict, optional
      :rtype: ContractionTree

   .. py:method:: compressed_reconfigure(minimize=None, order_only=False, max_nodes='auto', max_time=None, local_score=None, exploration_power=0, best_score=None, progbar=False, inplace=False)

      Reconfigure this tree according to ``peak_size_compressed``.

      :param chi: The maximum bond dimension to consider.
      :type chi: int
      :param order_only: Whether to only consider the ordering of the current tree contractions, or all possible contractions, starting with the current.
      :type order_only: bool, optional
      :param max_nodes: Set the maximum number of contraction steps to consider.
      :type max_nodes: int, optional
      :param max_time: Set the maximum time to spend on the search.
      :type max_time: float, optional
      :param local_score: A function that assigns a score to a potential contraction, with a lower score giving more priority to explore that contraction earlier. It should have signature::

              local_score(step, new_score, dsize, new_size)

          where ``step`` is the number of steps so far, ``new_score`` is the score of the contraction so far, ``dsize`` is the change in memory by the current step, and ``new_size`` is the new memory size after contraction.
:type local_score: callable, optional :param exploration_power: If not ``0.0``, the inverse power to which the step is raised in the default local score function. Higher values favor exploring more promising branches early on - at the cost of increased memory. Ignored if ``local_score`` is supplied. :type exploration_power: float, optional :param best_score: Manually specify an upper bound for best score found so far. :type best_score: float, optional :param progbar: If ``True``, display a progress bar. :type progbar: bool, optional :param inplace: Whether to perform the reconfiguration inplace on this tree. :type inplace: bool, optional :rtype: ContractionTree .. py:method:: windowed_reconfigure(minimize=None, order_only=False, window_size=20, max_iterations=100, max_window_tries=1000, score_temperature=0.0, queue_temperature=1.0, scorer=None, queue_scorer=None, seed=None, inplace=False, progbar=False, **kwargs) .. py:method:: flat_tree(order=None) Create a nested tuple representation of the contraction tree like:: ((0, (1, 2)), ((3, 4), ((5, (6, 7)), (8, 9)))) Such that the contraction will progress like:: ((0, (1, 2)), ((3, 4), ((5, (6, 7)), (8, 9)))) ((0, 12), (34, ((5, 67), 89))) (012, (34, (567, 89))) (012, (34, 56789)) (012, 3456789) 0123456789 Where each integer represents a leaf (i.e. single element node). .. py:method:: get_leaves_ordered() Return the list of leaves as ordered by the contraction tree. :rtype: tuple[frozenset[str]] .. py:method:: get_path(order=None) Generate a standard path from the contraction tree. .. py:method:: get_numpy_path(order=None) Generate a path compatible with the `optimize` kwarg of `numpy.einsum`. .. py:method:: get_ssa_path(order=None) Generate a ssa path from the contraction tree. .. py:method:: surface_order(node) .. py:method:: set_surface_order_from_path(ssa_path) .. py:method:: get_path_surface() .. py:method:: get_ssa_path_surface() .. 
py:method:: get_spans() Get all (which could mean none) potential embeddings of this contraction tree into a spanning tree of the original graph. :rtype: tuple[dict[frozenset[int], frozenset[int]]] .. py:method:: compute_centralities(combine='mean') Compute a centrality for every node in this contraction tree. .. py:method:: get_hypergraph(accel=False) Get a hypergraph representing the uncontracted network (i.e. the leaves). .. py:method:: reset_contraction_indices() Reset all information regarding the explicit contraction indices ordering. .. py:method:: sort_contraction_indices(priority='flops', make_output_contig=True, make_contracted_contig=True, reset=True) Set explicit orders for the contraction indices of this tree to optimize for one of two things: contiguity in contracted ('k') indices, or contiguity of left and right output ('m' and 'n') indices. :param priority: Which order to process the intermediate nodes in. Later nodes re-sort previous nodes and so are more likely to keep their ordering. E.g. for 'flops' the most costly contraction will be processed last and thus will be guaranteed to have its indices exactly sorted. :type priority: {'flops', 'size', 'root', 'leaves'}, optional :param make_output_contig: When processing a pairwise contraction, sort the parent contraction indices so that the order of indices is the order they appear from left to right in the two child (input) tensors. :type make_output_contig: bool, optional :param make_contracted_contig: When processing a pairwise contraction, sort the child (input) tensor indices so that all contracted indices appear contiguously. :type make_contracted_contig: bool, optional :param reset: Reset all indices to the default order before sorting. :type reset: bool, optional .. py:method:: print_contractions(sort=None, show_brackets=True) Print each pairwise contraction, with colorized indices (if `colorama` is installed), and other information. ..
py:method:: get_contractor(order=None, prefer_einsum=False, strip_exponent=False, check_zero=False, implementation=None, autojit=False, progbar=False) Get a reusable function which performs the contraction corresponding to this tree, cached. :param tree: The contraction tree. :type tree: ContractionTree :param order: Supplied to :meth:`ContractionTree.traverse`, the order in which to perform the pairwise contractions given by the tree. :type order: str or callable, optional :param prefer_einsum: Prefer to use ``einsum`` for pairwise contractions, even if ``tensordot`` can perform the contraction. :type prefer_einsum: bool, optional :param strip_exponent: If ``True``, the function will strip the exponent from the output array and return it separately. :type strip_exponent: bool, optional :param check_zero: If ``True``, when ``strip_exponent=True``, explicitly check for zero-valued intermediates that would otherwise produce ``nan``, instead terminating early if encountered and returning ``(0.0, 0.0)``. :type check_zero: bool, optional :param implementation: What library to use to actually perform the contractions. Options are: - None: let cotengra choose. - "autoray": dispatch with autoray, using the ``tensordot`` and ``einsum`` implementation of the backend. - "cotengra": use the ``tensordot`` and ``einsum`` implementation of cotengra, which is based on batch matrix multiplication. This is faster for some backends like numpy, and also enables libraries which don't yet provide ``tensordot`` and ``einsum`` to be used. - "cuquantum": use the cuquantum library to perform the whole contraction (not just individual contractions). - tuple[callable, callable]: manually supply the ``tensordot`` and ``einsum`` implementations to use. :type implementation: str or tuple[callable, callable], optional :param autojit: If ``True``, use :func:`autoray.autojit` to compile the contraction function. 
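As a sketch of the final ``tuple[callable, callable]`` option, the two callables only need to follow the ``numpy.tensordot`` and ``numpy.einsum`` calling conventions (the function names here are illustrative, not part of the API):

```python
import numpy as np

def my_tensordot(a, b, axes=2):
    # same calling convention as numpy.tensordot
    return np.tensordot(a, b, axes=axes)

def my_einsum(eq, *arrays):
    # same calling convention as numpy.einsum
    return np.einsum(eq, *arrays)

# e.g. fn = tree.get_contractor(implementation=(my_tensordot, my_einsum))
```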
:type autojit: bool, optional :param progbar: Whether to show progress through the contraction by default. :type progbar: bool, optional :returns: **fn** -- The contraction function, with signature ``fn(*arrays)``. :rtype: callable .. py:method:: contract_core(arrays, order=None, prefer_einsum=False, strip_exponent=False, check_zero=False, backend=None, implementation=None, autojit='auto', progbar=False) Contract ``arrays`` with this tree. The order of the axes and output is assumed to be that of ``tree.inputs`` and ``tree.output``, but with sliced indices removed. This function contracts the core tree and thus if indices have been sliced the arrays supplied need to be sliced as well. :param arrays: The arrays to contract. :type arrays: sequence of array :param order: Supplied to :meth:`ContractionTree.traverse`. :type order: str or callable, optional :param prefer_einsum: Prefer to use ``einsum`` for pairwise contractions, even if ``tensordot`` can perform the contraction. :type prefer_einsum: bool, optional :param backend: What library to use for ``einsum`` and ``transpose``, will be automatically inferred from the arrays if not given. :type backend: str, optional :param autojit: Whether to use ``autoray.autojit`` to jit compile the expression. If "auto", then let ``cotengra`` choose. :type autojit: "auto" or bool, optional :param progbar: Show progress through the contraction. :type progbar: bool, optional .. py:method:: slice_key(i, strides=None) Get the combination of sliced index values for overall slice ``i``. :param i: The overall slice index. :type i: int :returns: **key** -- The value each sliced index takes for slice ``i``. :rtype: dict[str, int] .. py:method:: slice_arrays(arrays, i) Take ``arrays`` and slice the relevant inputs according to ``tree.sliced_inds`` and the dynary representation of ``i``. .. py:method:: contract_slice(arrays, i, **kwargs) Get slices ``i`` of ``arrays`` and then contract them. ..
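The mapping from an overall slice number ``i`` to per-index values performed by ``slice_key`` is essentially a mixed-radix ('dynary') decomposition; a standalone illustration of the idea (not the library routine itself, which also handles explicit strides):

```python
def slice_key_sketch(i, sliced_sizes):
    # sliced_sizes: mapping of sliced index name -> dimension size,
    # in slicing order (most significant index first)
    key = {}
    for ix, d in reversed(list(sliced_sizes.items())):
        i, key[ix] = divmod(i, d)
    return key
```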
py:method:: gather_slices(slices, backend=None, progbar=False) Gather all the output contracted slices into a single full result. If none of the sliced indices appear in the output, then this is a simple sum - otherwise the slices need to be partially summed and partially stacked. .. py:method:: gen_output_chunks(arrays, with_key=False, progbar=False, **contract_opts) Generate each output chunk of the contraction - i.e. summing over the internally sliced indices first, before yielding each chunk. This assumes that the ``sliced_inds`` are sorted by whether they appear in the output or not (the default order). Useful for performing some kind of reduction over the final tensor object like ``fn(x).sum()`` without constructing the entire thing. :param arrays: The arrays to contract. :type arrays: sequence of array :param with_key: Whether to yield the output index configuration key along with the chunk. :type with_key: bool, optional :param progbar: Show progress through the contraction chunks. :type progbar: bool, optional :Yields: * **chunk** (*array*) -- A chunk of the contracted result. * **key** (*dict[str, int]*) -- The value each sliced output index takes for this chunk. .. py:method:: contract(arrays, order=None, prefer_einsum=False, strip_exponent=False, check_zero=False, backend=None, implementation=None, autojit='auto', progbar=False) Contract ``arrays`` with this tree. This function takes *unsliced* arrays and handles the slicing, contractions and gathering. The order of the axes and output is assumed to match that of ``tree.inputs`` and ``tree.output``. :param arrays: The arrays to contract. :type arrays: sequence of array :param order: Supplied to :meth:`ContractionTree.traverse`. :type order: str or callable, optional :param prefer_einsum: Prefer to use ``einsum`` for pairwise contractions, even if ``tensordot`` can perform the contraction.
:type prefer_einsum: bool, optional :param strip_exponent: If ``True``, eagerly strip the exponent (in log10) from intermediate tensors to control numerical problems from leaving the range of the datatype. This method then returns the scaled 'mantissa' output array and the exponent separately. :type strip_exponent: bool, optional :param check_zero: If ``True``, when ``strip_exponent=True``, explicitly check for zero-valued intermediates that would otherwise produce ``nan``, instead terminating early if encountered and returning ``(0.0, 0.0)``. :type check_zero: bool, optional :param backend: What library to use for ``tensordot``, ``einsum`` and ``transpose``, it will be automatically inferred from the input arrays if not given. :type backend: str, optional :param autojit: Whether to use the 'autojit' feature of `autoray` to compile the contraction expression. :type autojit: bool, optional :param progbar: Whether to show a progress bar. :type progbar: bool, optional :returns: * **output** (*array*) -- The contracted output, it will be scaled if ``strip_exponent==True``. * **exponent** (*float*) -- The exponent of the output in base 10, returned only if ``strip_exponent==True``. .. seealso:: :obj:`contract_core`, :obj:`contract_slice`, :obj:`slice_arrays`, :obj:`gather_slices` .. py:method:: contract_mpi(arrays, comm=None, root=None, **kwargs) Contract the slices of this tree and sum them in parallel - *assuming* we are already running under MPI. :param arrays: The input (unsliced) arrays. :type arrays: sequence of array :param comm: Defaults to ``mpi4py.MPI.COMM_WORLD`` if not given. :type comm: None or mpi4py communicator :param root: If ``root=None``, an ``Allreduce`` will be performed such that every process has the resulting tensor, else if an integer e.g. ``root=0``, the result will be exclusively gathered to that process using ``Reduce``, with every other process returning ``None``.
:type root: None or int, optional :param kwargs: Supplied to :meth:`~cotengra.ContractionTree.contract_slice`. .. py:method:: benchmark(dtype, max_time=60, min_reps=3, max_reps=100, warmup=True, **contract_opts) Benchmark the contraction of this tree. :param dtype: The datatype to use. :type dtype: {"float32", "float64", "complex64", "complex128"} :param max_time: The maximum time to spend benchmarking in seconds. :type max_time: float, optional :param min_reps: The minimum number of repetitions to perform, regardless of time. :type min_reps: int, optional :param max_reps: The maximum number of repetitions to perform, regardless of time. :type max_reps: int, optional :param warmup: Whether to perform a warmup run before the benchmark. If an int, the number of warmup runs to perform. :type warmup: bool or int, optional :param contract_opts: Supplied to :meth:`~cotengra.ContractionTree.contract_slice`. :returns: A dictionary of benchmarking results. The keys are: - "time_per_slice" : float The average time to contract a single slice. - "est_time_total" : float The estimated total time to contract all slices. - "est_gigaflops" : float The estimated gigaflops of the contraction. :rtype: dict .. seealso:: :obj:`contract_slice` .. py:method:: plot_hypergraph(**kwargs) .. py:method:: describe(info='normal', join=' ') Return a string describing the contraction tree. .. py:method:: __repr__() Return repr(self). .. py:method:: __str__() Return str(self). .. py:class:: ContractionTreeCompressed(inputs, output, size_dict, track_childless=False, track_flops=False, track_write=False, track_size=False, objective=None) Bases: :py:obj:`ContractionTree` A contraction tree for compressed contractions. Currently the only difference is that this defaults to the 'surface' traversal ordering. .. py:attribute:: total_flops .. py:attribute:: total_write .. py:attribute:: combo_cost .. py:attribute:: total_cost .. py:attribute:: max_size .. py:attribute:: peak_size .. 
py:attribute:: contraction_cost .. py:attribute:: contraction_width .. py:attribute:: total_flops_exact .. py:attribute:: total_write_exact .. py:attribute:: combo_cost_exact .. py:attribute:: total_cost_exact .. py:attribute:: max_size_exact .. py:attribute:: peak_size_exact .. py:method:: from_path(inputs, output, size_dict, *, path=None, ssa_path=None, check=False, **kwargs) :classmethod: Create a (completed) ``ContractionTreeCompressed`` from the usual inputs plus a standard contraction path or 'ssa_path' - you need to supply one. This also sets the default 'surface' traversal ordering to be the initial path. .. py:method:: get_default_order() .. py:method:: get_default_objective() Get the objective function for this tree. .. py:method:: get_default_chi() .. py:method:: get_default_compress_late() .. py:method:: get_contractor(*_, **__) :abstractmethod: Get a reusable function which performs the contraction corresponding to this tree, cached. :param tree: The contraction tree. :type tree: ContractionTree :param order: Supplied to :meth:`ContractionTree.traverse`, the order in which to perform the pairwise contractions given by the tree. :type order: str or callable, optional :param prefer_einsum: Prefer to use ``einsum`` for pairwise contractions, even if ``tensordot`` can perform the contraction. :type prefer_einsum: bool, optional :param strip_exponent: If ``True``, the function will strip the exponent from the output array and return it separately. :type strip_exponent: bool, optional :param check_zero: If ``True``, when ``strip_exponent=True``, explicitly check for zero-valued intermediates that would otherwise produce ``nan``, instead terminating early if encountered and returning ``(0.0, 0.0)``. :type check_zero: bool, optional :param implementation: What library to use to actually perform the contractions. Options are: - None: let cotengra choose. - "autoray": dispatch with autoray, using the ``tensordot`` and ``einsum`` implementation of the backend.
- "cotengra": use the ``tensordot`` and ``einsum`` implementation of cotengra, which is based on batch matrix multiplication. This is faster for some backends like numpy, and also enables libraries which don't yet provide ``tensordot`` and ``einsum`` to be used. - "cuquantum": use the cuquantum library to perform the whole contraction (not just individual contractions). - tuple[callable, callable]: manually supply the ``tensordot`` and ``einsum`` implementations to use. :type implementation: str or tuple[callable, callable], optional :param autojit: If ``True``, use :func:`autoray.autojit` to compile the contraction function. :type autojit: bool, optional :param progbar: Whether to show progress through the contraction by default. :type progbar: bool, optional :returns: **fn** -- The contraction function, with signature ``fn(*arrays)``. :rtype: callable .. py:class:: ContractionTreeMulti(inputs, output, size_dict, sliced_inds, objective, track_cache=False) Bases: :py:obj:`cotengra.core.ContractionTree` Binary tree representing a tensor network contraction. :param inputs: The list of input tensor's indices. :type inputs: sequence of str :param output: The output indices. :type output: str :param size_dict: The size of each index. :type size_dict: dict[str, int] :param track_childless: Whether to dynamically keep track of which nodes are childless. Useful if you are 'divisively' building the tree. :type track_childless: bool, optional :param track_flops: Whether to dynamically keep track of the total number of flops. If ``False`` You can still compute this once the tree is complete. :type track_flops: bool, optional :param track_write: Whether to dynamically keep track of the total number of elements written. If ``False`` You can still compute this once the tree is complete. :type track_write: bool, optional :param track_size: Whether to dynamically keep track of the largest tensor so far. If ``False`` You can still compute this once the tree is complete. 
:type track_size: bool, optional :param objective: A default objective function to use for further optimization and scoring, for example reconfiguring or computing the combo cost. If not supplied the default is to create a flops objective when needed. :type objective: str or Objective, optional .. attribute:: children Mapping of each node to two children. :type: dict[node, tuple[node]] .. attribute:: info Information about the tree nodes. The key is the set of inputs (a set of input indices) the node contains. Or in other words, the subgraph of the node. The value is a dictionary to cache information about effective 'leg' indices, size, flops of formation etc. :type: dict[node, dict] .. py:method:: set_state_from(other) Set the internal state of this tree to that of ``other``. .. py:method:: _remove_node(node) Remove ``node`` from this tree and update the flops and maximum size if tracking them respectively, as well as input pre-processing. .. py:method:: _update_tracked(node) .. py:method:: get_node_var_inds(node) Get the set of variable indices that a node depends on. .. py:method:: get_node_is_bright(node) Get whether a node is 'bright', i.e. contains a different set of variable indices to either of its children. If a node is not bright then its children never have to be stored in the cache. .. py:method:: get_node_mult(node) Get the estimated 'multiplicity' of a node, i.e. the number of times it will have to be recomputed for different index configurations. .. py:method:: get_node_cache_mult(node, sliced_ind_ordering) Get the estimated 'cache multiplicity' of a node, i.e. the total number of versions with different index configurations that must be stored simultaneously in the cache. .. py:method:: get_flops(node) Get the estimated total cost of computing a node for all index configurations. .. py:method:: get_cache_contrib(node) .. py:method:: peak_size(log=None) Get the peak concurrent size of tensors needed - this depends on the traversal order, i.e.
the exact contraction path, not just the contraction tree. .. py:method:: reorder_contractions_for_peak_est() Reorder the contractions to try and reduce the peak memory usage. .. py:method:: reorder_sliced_inds() .. py:method:: exact_multi_stats(configs) .. py:class:: HyperGraph(inputs, output=None, size_dict=None) Simple hypergraph builder and writer. :param inputs: The nodes. If given as a dict, the keys will be taken as the node enumeration rather than ``range(len(inputs))``. :type inputs: sequence of list[str] or dict[int, list[str]] :param output: Output indices. :type output: str, optional :param size_dict: Size of each index. :type size_dict: dict[str, int], optional .. attribute:: nodes Mapping of each node to the list of edges incident to it. :type: dict[int, list[str]] .. attribute:: edges Mapping of each hyper edge to the list of nodes it is incident to. :type: dict[str, list[int]] .. attribute:: num_nodes The number of nodes. :type: int .. attribute:: num_edges The number of hyper-edges. :type: int .. py:property:: num_nodes .. py:property:: num_edges .. py:attribute:: __slots__ :value: ('inputs', 'output', 'size_dict', 'nodes', 'edges', 'node_counter') .. py:attribute:: plot .. py:method:: copy() Copy this ``HyperGraph``. .. py:method:: from_edges(edges, output=(), size_dict=()) :classmethod: .. py:method:: get_num_nodes() .. py:method:: get_num_edges() .. py:method:: __len__() .. py:method:: edges_size(es) Get the combined, i.e. product, size of all edges in ``es``. .. py:method:: bond_size(i, j) Get the combined, i.e. product, size of edges shared by nodes ``i`` and ``j``. .. py:method:: node_size(i) Get the size of the term represented by node ``i``. .. py:method:: neighborhood_size(nodes) Get the size of nodes in the immediate neighborhood of ``nodes``. .. py:method:: contract_pair_cost(i, j) Get the cost of contracting nodes ``i`` and ``j`` - the product of the dimensions of the indices involved. .. py:method:: neighborhood_compress_cost(chi, nodes) ..
py:method:: total_node_size() Get the total size of all nodes. .. py:method:: output_nodes() Get the nodes with output indices. .. py:method:: neighbors(i) Get the neighbors of node ``i``. .. py:method:: neighbor_edges(i) Get the edges incident to all neighbors of node ``i`` (including its own edges). .. py:method:: has_node(i) Does this hypergraph have node ``i``? .. py:method:: get_node(i) Get the edges node ``i`` is incident to. .. py:method:: get_edge(e) Get the nodes edge ``e`` is incident to. .. py:method:: has_edge(e) Does this hypergraph have edge ``e``? .. py:method:: next_node() Get the next available node identifier. .. py:method:: add_node(inds, node=None) Add a node with ``inds``, and optional identifier ``node``. The identifier will be generated if not given and returned. .. py:method:: remove_node(i) Remove node ``i`` from this hypergraph. .. py:method:: remove_edge(e) Remove edge ``e`` from this hypergraph. .. py:method:: contract(i, j, node=None) Combine node ``i`` and node ``j``. .. py:method:: compress(chi, edges=None) 'Compress' multiedges, combining their size up to a maximum of ``chi``. .. py:method:: compute_contracted_inds(nodes) Generate the output indices if one were to contract ``nodes``. .. py:method:: candidate_contraction_size(i, j, chi=None) Get the size of the node created if ``i`` and ``j`` were contracted, optionally including the effect of first compressing bonds to size ``chi``. .. py:method:: all_shortest_distances(nodes=None) .. py:method:: all_shortest_distances_condensed(nodes=None) .. py:method:: simple_distance(region, p=2) Compute a simple distance metric from nodes in ``region`` to all others. Unlike graph distance, relative connectedness is taken into account. .. py:method:: simple_closeness(p=0.75, mu=0.5) Compute a rough hypergraph 'closeness'. :param p: Once any node has had ``H.num_nodes**p`` visitors, terminate. Set greater than 1.0 for no limit (slower).
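The ``nodes`` and ``edges`` attributes of a ``HyperGraph`` are inverse incidence maps of one another; building the edge map from the inputs can be sketched as follows (a standalone illustration of the data structure, not the library implementation):

```python
def build_edge_map(inputs):
    # inputs: mapping of node id -> list of incident index names
    edges = {}
    for node, inds in inputs.items():
        for ix in inds:
            edges.setdefault(ix, []).append(node)
    return edges
```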
:type p: float, optional :param mu: Let the visitor score decay with this power. The higher this is, the more local connectivity is favored. :type mu: float, optional :returns: **scores** -- The simple hypergraph closenesses - higher being more central. :rtype: dict[int, float] .. py:method:: simple_centrality(r=None, smoothness=2, **closeness_opts) A simple algorithm for large hypergraph centrality. First we find a rough closeness centrality, then relax / smooth this by nodes iteratively radiating their centrality to their neighbors. :param r: Number of iterations. Defaults to ``max(10, int(self.num_nodes**0.5))``. :type r: None or int, optional :param smoothness: The smoothness. In conjunction with a high value of ``r`` this will create a smooth gradient from one side of the hypergraph to the other. :type smoothness: float, optional :param closeness_opts: Supplied to ``HyperGraph.simple_closeness`` as the starting point. :rtype: dict[int, float] .. py:method:: compute_loops(start=None, max_loop_length=None) Generate all loops up to a certain length in this hypergraph. :param start: Only generate loops including these nodes, defaults to all. :type start: sequence of int, optional :param max_loop_length: The maximum loop length to search for. If ``None``, then this is set automatically by the length of the first loop found. :type max_loop_length: None or int, optional :Yields: **loop** (*tuple[int]*) -- A set of nodes that form a loop. .. py:method:: get_laplacian() Get the graph Laplacian. .. py:method:: get_resistance_distances() Get the resistance distance between all nodes of the raw graph. .. py:method:: resistance_centrality(rescale=True) Compute the centrality in terms of the total resistance distance to all other nodes. .. py:method:: to_networkx(as_tree_leaves=False) Convert to a networkx Graph, with hyperedges represented as nodes. :param as_tree_leaves: If true, then the nodes are converted to 'tree leaf' form, i.e.
map node ``i`` to ``frozenset([i])``, to match the nodes in a ``ContractionTree``. :type as_tree_leaves: bool, optional .. py:method:: compute_weights(weight_edges='const', weight_nodes='const') .. py:method:: __repr__() Return repr(self). .. py:function:: get_hypergraph(inputs, output=None, size_dict=None, accel=False) Single entry-point for creating a, possibly accelerated, HyperGraph. .. py:class:: HyperCompressedOptimizer(chi=None, methods=('greedy-compressed', 'greedy-span', 'kahypar-agglom'), minimize='peak-compressed', **kwargs) Bases: :py:obj:`HyperOptimizer` A compressed contraction path optimizer that samples a series of ordered contraction trees while optimizing the hyper parameters used to generate them. :param chi: The maximum bond dimension to compress to. If ``None`` then use the square of the largest existing dimension. If ``minimize`` is specified as a full scoring function, this is ignored. :type chi: None or int, optional :param methods: Which method(s) to use from ``list_hyper_functions()``. :type methods: None or sequence[str] or str, optional :param minimize: How to score each trial, used to train the optimizer and rank the results. If a custom callable, it should take a ``trial`` dict as its argument and return a single float. :type minimize: str, Objective or callable, optional :param max_repeats: The maximum number of trial contraction trees to generate. Default: 128. :type max_repeats: int, optional :param max_time: The maximum amount of time to run for. Use ``None`` for no limit. You can also set an estimated execution 'rate' here like ``'rate:1e9'`` that will terminate the search when the estimated FLOPs of the best contraction found divided by the rate is greater than the time spent searching, allowing quick termination on easy contractions. :type max_time: None or float, optional :param parallel: Whether to parallelize the search. 
:type parallel: 'auto', False, True, int, or distributed.Client :param slicing_opts: If supplied, once a trial contraction path is found, try slicing with the given options, and then update the flops and size of the trial with the sliced versions. :type slicing_opts: dict, optional :param slicing_reconf_opts: If supplied, once a trial contraction path is found, try slicing interleaved with subtree reconfiguration with the given options, and then update the flops and size of the trial with the sliced and reconfigured versions. :type slicing_reconf_opts: dict, optional :param reconf_opts: If supplied, once a trial contraction path is found, try subtree reconfiguration with the given options, and then update the flops and size of the trial with the reconfigured versions. :type reconf_opts: dict, optional :param optlib: Which optimizer to sample and train with. :type optlib: {'baytune', 'nevergrad', 'chocolate', 'skopt'}, optional :param space: The hyper space to search, see ``get_hyper_space`` for the default. :type space: dict, optional :param score_compression: Raise scores to this power in order to compress or accentuate the differences. The lower this is, the more the selector will sample from various optimizers rather than quickly specializing. :type score_compression: float, optional :param max_training_steps: The maximum number of trials to train the optimizer with. Setting this can be helpful when the optimizer itself becomes costly to train (e.g. for Gaussian Processes). :type max_training_steps: int, optional :param progbar: Show live progress of the best contraction found so far. :type progbar: bool, optional :param optlib_opts: Supplied to the hyper-optimizer library initialization. .. py:attribute:: compressed :value: True .. py:attribute:: multicontraction :value: False ..
py:class:: HyperMultiOptimizer(methods=None, minimize='flops', max_repeats=128, max_time=None, parallel='auto', simulated_annealing_opts=None, slicing_opts=None, slicing_reconf_opts=None, reconf_opts=None, optlib=None, space=None, score_compression=0.75, on_trial_error='warn', max_training_steps=None, progbar=False, **optlib_opts) Bases: :py:obj:`HyperOptimizer` A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them. :param methods: Which method(s) to use from ``list_hyper_functions()``. :type methods: None or sequence[str] or str, optional :param minimize: How to score each trial, used to train the optimizer and rank the results. If a custom callable, it should take a ``trial`` dict as its argument and return a single float. :type minimize: str, Objective or callable, optional :param max_repeats: The maximum number of trial contraction trees to generate. Default: 128. :type max_repeats: int, optional :param max_time: The maximum amount of time to run for. Use ``None`` for no limit. You can also set an estimated execution 'rate' here like ``'rate:1e9'`` that will terminate the search when the estimated FLOPs of the best contraction found divided by the rate is greater than the time spent searching, allowing quick termination on easy contractions. :type max_time: None or float, optional :param parallel: Whether to parallelize the search. :type parallel: 'auto', False, True, int, or distributed.Client :param slicing_opts: If supplied, once a trial contraction path is found, try slicing with the given options, and then update the flops and size of the trial with the sliced versions. :type slicing_opts: dict, optional :param slicing_reconf_opts: If supplied, once a trial contraction path is found, try slicing interleaved with subtree reconfiguration with the given options, and then update the flops and size of the trial with the sliced and reconfigured versions.
:type slicing_reconf_opts: dict, optional :param reconf_opts: If supplied, once a trial contraction path is found, try subtree reconfiguration with the given options, and then update the flops and size of the trial with the reconfigured versions. :type reconf_opts: dict, optional :param optlib: Which optimizer to sample and train with. :type optlib: {'baytune', 'nevergrad', 'chocolate', 'skopt'}, optional :param space: The hyper space to search, see ``get_hyper_space`` for the default. :type space: dict, optional :param score_compression: Raise scores to this power in order to compress or accentuate the differences. The lower this is, the more the selector will sample from various optimizers rather than quickly specializing. :type score_compression: float, optional :param on_trial_error: What to do if a trial fails. If ``'warn'`` (default), a warning will be printed and the trial will be given a score of ``inf``. If ``'raise'`` the error will be raised. If ``'ignore'`` the trial will be given a score of ``inf`` silently. :type on_trial_error: {'warn', 'raise', 'ignore'}, optional :param max_training_steps: The maximum number of trials to train the optimizer with. Setting this can be helpful when the optimizer itself becomes costly to train (e.g. for Gaussian Processes). :type max_training_steps: int, optional :param progbar: Show live progress of the best contraction found so far. :type progbar: bool, optional :param optlib_opts: Supplied to the hyper-optimizer library initialization. .. py:attribute:: compressed :value: False .. py:attribute:: multicontraction :value: True ..
py:class:: HyperOptimizer(methods=None, minimize='flops', max_repeats=128, max_time=None, parallel='auto', simulated_annealing_opts=None, slicing_opts=None, slicing_reconf_opts=None, reconf_opts=None, optlib=None, space=None, score_compression=0.75, on_trial_error='warn', max_training_steps=None, progbar=False, **optlib_opts) Bases: :py:obj:`cotengra.oe.PathOptimizer` A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them. :param methods: Which method(s) to use from ``list_hyper_functions()``. :type methods: None or sequence[str] or str, optional :param minimize: How to score each trial, used to train the optimizer and rank the results. If a custom callable, it should take a ``trial`` dict as its argument and return a single float. :type minimize: str, Objective or callable, optional :param max_repeats: The maximum number of trial contraction trees to generate. Default: 128. :type max_repeats: int, optional :param max_time: The maximum amount of time to run for. Use ``None`` for no limit. You can also set an estimated execution 'rate' here like ``'rate:1e9'`` that will terminate the search when the estimated FLOPs of the best contraction found divided by the rate is greater than the time spent searching, allowing quick termination on easy contractions. :type max_time: None or float, optional :param parallel: Whether to parallelize the search. :type parallel: 'auto', False, True, int, or distributed.Client :param slicing_opts: If supplied, once a trial contraction path is found, try slicing with the given options, and then update the flops and size of the trial with the sliced versions. :type slicing_opts: dict, optional :param slicing_reconf_opts: If supplied, once a trial contraction path is found, try slicing interleaved with subtree reconfiguration with the given options, and then update the flops and size of the trial with the sliced and reconfigured versions.
:type slicing_reconf_opts: dict, optional :param reconf_opts: If supplied, once a trial contraction path is found, try subtree reconfiguration with the given options, and then update the flops and size of the trial with the reconfigured versions. :type reconf_opts: dict, optional :param optlib: Which optimizer to sample and train with. :type optlib: {'baytune', 'nevergrad', 'chocolate', 'skopt'}, optional :param space: The hyper space to search, see ``get_hyper_space`` for the default. :type space: dict, optional :param score_compression: Raise scores to this power in order to compress or accentuate the differences. The lower this is, the more the selector will sample from various optimizers rather than quickly specializing. :type score_compression: float, optional :param on_trial_error: What to do if a trial fails. If ``'warn'`` (default), a warning will be printed and the trial will be given a score of ``inf``. If ``'raise'`` the error will be raised. If ``'ignore'`` the trial will be given a score of ``inf`` silently. :type on_trial_error: {'warn', 'raise', 'ignore'}, optional :param max_training_steps: The maximum number of trials to train the optimizer with. Setting this can be helpful when the optimizer itself becomes costly to train (e.g. for Gaussian Processes). :type max_training_steps: int, optional :param progbar: Show live progress of the best contraction found so far. :type progbar: bool, optional :param optlib_opts: Supplied to the hyper-optimizer library initialization. .. py:property:: minimize .. py:property:: parallel .. py:property:: tree .. py:property:: path .. py:attribute:: compressed :value: False .. py:attribute:: multicontraction :value: False .. py:attribute:: plot_trials .. py:attribute:: plot_trials_alt .. py:attribute:: plot_scatter .. py:attribute:: plot_scatter_alt .. py:method:: setup(inputs, output, size_dict) .. py:method:: _maybe_cancel_futures() .. py:method:: _maybe_report_result(setting, trial) .. 
py:method:: _gen_results(repeats, trial_fn, trial_args) .. py:method:: _get_and_report_next_future() .. py:method:: _gen_results_parallel(repeats, trial_fn, trial_args) .. py:method:: _search(inputs, output, size_dict) .. py:method:: search(inputs, output, size_dict) Run this optimizer and return the ``ContractionTree`` for the best path it finds. .. py:method:: get_tree() Return the ``ContractionTree`` for the best path found. .. py:method:: __call__(inputs, output, size_dict, memory_limit=None) ``opt_einsum`` interface, returns direct ``path``. .. py:method:: get_trials(sort=None) .. py:method:: print_trials(sort=None) .. py:method:: to_df() Create a single ``pandas.DataFrame`` with all trials and scores. .. py:method:: to_dfs_parametrized() Create a ``pandas.DataFrame`` for each method, with all parameters and scores for each trial. .. py:class:: ReusableHyperCompressedOptimizer(chi=None, methods=('greedy-compressed', 'greedy-span', 'kahypar-agglom'), minimize='peak-compressed', **kwargs) Bases: :py:obj:`ReusableHyperOptimizer` Like ``HyperCompressedOptimizer`` but it will re-instantiate the optimizer whenever a new contraction is detected, and also cache the paths found. :param chi: The maximum bond dimension to compress to. If ``None`` then use the square of the largest existing dimension. If ``minimize`` is specified as a full scoring function, this is ignored. :type chi: None or int, optional :param directory: If specified use this directory as a persistent cache. If ``True`` auto generate a directory in the current working directory based on the options which are most likely to affect the path (see `ReusableHyperOptimizer.get_path_relevant_opts`). :type directory: None, True, or str, optional :param overwrite: If ``True``, the optimizer will always run, overwriting old results in the cache. This can be used to update paths without deleting the whole cache. 
:type overwrite: bool, optional :param set_surface_order: If ``True``, when reloading a path to turn into a ``ContractionTree``, the 'surface order' of the path (used for compressed paths) will be set manually to match the order of the stored path. :type set_surface_order: bool, optional :param hash_method: The method used to hash the contraction tree. The default, ``'a'``, is faster hashwise but doesn't recognize when indices are permuted. :type hash_method: {'a', 'b', ...}, optional :param cache_only: If ``True``, the optimizer will only use the cache, and will raise ``KeyError`` if a contraction is not found. :type cache_only: bool, optional :param opt_kwargs: Supplied to ``HyperCompressedOptimizer``. .. py:attribute:: suboptimizer .. py:attribute:: set_surface_order :value: True .. py:class:: ReusableHyperOptimizer(*, directory=None, overwrite=False, hash_method='a', cache_only=False, **opt_kwargs) Bases: :py:obj:`cotengra.oe.PathOptimizer` Like ``HyperOptimizer`` but it will re-instantiate the optimizer whenever a new contraction is detected, and also cache the paths (and sliced indices) found. :param directory: If specified use this directory as a persistent cache. If ``True`` auto generate a directory in the current working directory based on the options which are most likely to affect the path (see `ReusableHyperOptimizer.get_path_relevant_opts`). :type directory: None, True, or str, optional :param overwrite: If ``True``, the optimizer will always run, overwriting old results in the cache. This can be used to update paths without deleting the whole cache. :type overwrite: bool, optional :param set_surface_order: If ``True``, when reloading a path to turn into a ``ContractionTree``, the 'surface order' of the path (used for compressed paths) will be set manually to match the order of the stored path. :type set_surface_order: bool, optional :param hash_method: The method used to hash the contraction tree. 
The default, ``'a'``, is faster hashwise but doesn't recognize when indices are permuted. :type hash_method: {'a', 'b', ...}, optional :param cache_only: If ``True``, the optimizer will only use the cache, and will raise ``KeyError`` if a contraction is not found. :type cache_only: bool, optional :param opt_kwargs: Supplied to ``HyperOptimizer``. .. py:property:: last_opt .. py:attribute:: suboptimizer .. py:attribute:: set_surface_order :value: False .. py:method:: get_path_relevant_opts() Get a frozenset of the options that are most likely to affect the path. These are the options that we use when the directory name is not manually specified. .. py:method:: auto_hash_path_relevant_opts() Automatically hash the path relevant options used to create the optimizer. .. py:method:: hash_query(inputs, output, size_dict) Hash the contraction specification, returning the hash and whether the contraction is already cached, as a tuple. .. py:method:: _compute_path(inputs, output, size_dict) .. py:method:: update_from_tree(tree, overwrite=True) Explicitly add the contraction that ``tree`` represents into the cache. For example, if you have manually improved it via reconfiguring. If ``overwrite=False`` and the contraction is present already then do nothing. .. py:method:: __call__(inputs, output, size_dict, memory_limit=None) .. py:method:: search(inputs, output, size_dict) .. py:method:: cleanup() .. py:function:: get_hyper_space() .. py:function:: hash_contraction(inputs, output, size_dict, method='a') Compute a hash for a particular contraction geometry. .. py:function:: list_hyper_functions() Return a list of currently registered hyper contraction finders. .. py:function:: array_contract(arrays, inputs, output=None, optimize='auto', cache_expression=True, backend=None, **kwargs) Perform the tensor contraction specified by ``inputs``, ``output`` and ``size_dict``, using the strategy given by ``optimize``. 
By default the path finding and expression building is cached, so that if a matching contraction is performed multiple times the overhead is negated. :param arrays: The arrays to contract. :type arrays: Sequence[array_like] :param inputs: The input terms. :type inputs: Sequence[Sequence[Hashable]] :param output: The output term. :type output: Sequence[Hashable] :param optimize: The optimization strategy to use. This can be: - A string preset, e.g. ``'auto'``, ``'greedy'``, ``'optimal'``. - A ``PathOptimizer`` instance. - An explicit path, e.g. ``[(0, 1), (2, 3), ...]``. - An explicit ``ContractionTree`` instance. If the optimizer provides sliced indices they will be used. :type optimize: str, path_like, PathOptimizer, or ContractionTree :param cache_expression: If ``True``, cache the expression used to contract the arrays. This negates the overhead of pathfinding and building the expression when a contraction is performed multiple times. Only for hashable ``optimize`` objects. :type cache_expression: bool, optional :param backend: If given, the explicit backend to use for the contraction, by default the backend is dispatched automatically. :type backend: str, optional :param kwargs: Passed to :func:`~cotengra.interface.array_contract_expression`. :rtype: array_like .. seealso:: :obj:`array_contract_expression`, :obj:`array_contract_tree`, :obj:`einsum` .. py:function:: array_contract_expression(inputs, output=None, size_dict=None, shapes=None, optimize='auto', constants=None, canonicalize=True, cache=True, **kwargs) Get a callable 'expression' that will contract tensors with indices and shapes described by ``inputs`` and ``size_dict`` to ``output``. The ``optimize`` kwarg can be a path, an optimizer, or a contraction tree. In the latter case, sliced indices, for example, will be used if present. The same is true if ``optimize`` is an optimizer that can directly produce ``ContractionTree`` instances (i.e. has a ``.search()`` method). 
:param inputs: The input terms. :type inputs: Sequence[Sequence[Hashable]] :param output: The output term. :type output: Sequence[Hashable] :param size_dict: The size of each index. :type size_dict: dict[Hashable, int] :param optimize: The optimization strategy to use. This can be: - A string preset, e.g. ``'auto'``, ``'greedy'``, ``'optimal'``. - A ``PathOptimizer`` instance. - An explicit path, e.g. ``[(0, 1), (2, 3), ...]``. - An explicit ``ContractionTree`` instance. If the optimizer provides sliced indices they will be used. :type optimize: str, path_like, PathOptimizer, or ContractionTree :param constants: A mapping of constant input positions to constant arrays. If given, the final expression will take only the remaining non-constant tensors as inputs. Note this is a different format to the ``constants`` kwarg of :func:`einsum_expression` since it also provides the constant arrays. :type constants: dict[int, array_like], optional :param implementation: What library to use to actually perform the contractions. Options are: - None: let cotengra choose. - "autoray": dispatch with autoray, using the ``tensordot`` and ``einsum`` implementation of the backend. - "cotengra": use the ``tensordot`` and ``einsum`` implementation of cotengra, which is based on batch matrix multiplication. This is faster for some backends like numpy, and also enables libraries which don't yet provide ``tensordot`` and ``einsum`` to be used. - "cuquantum": use the cuquantum library to perform the whole contraction (not just individual contractions). - tuple[callable, callable]: manually supply the ``tensordot`` and ``einsum`` implementations to use. :type implementation: str or tuple[callable, callable], optional :param autojit: If ``True``, use :func:`autoray.autojit` to compile the contraction function. :type autojit: bool, optional :param via: If given, the first function will be applied to the input arrays and the second to the output array. 
For example, moving the tensors from CPU to GPU and back. :type via: tuple[callable, callable], optional :param sort_contraction_indices: If ``True``, call ``tree.sort_contraction_indices()`` before constructing the contraction function. :type sort_contraction_indices: bool, optional :param cache: If ``True``, cache the contraction expression. This negates the overhead of pathfinding and building the expression when a contraction is performed multiple times. Only for hashable ``optimize`` objects. :type cache: bool, optional :returns: **expr** -- A callable, signature ``expr(*arrays)`` that will contract ``arrays`` with shapes ``shapes``. :rtype: callable .. seealso:: :obj:`einsum_expression`, :obj:`array_contract`, :obj:`array_contract_tree` .. py:function:: array_contract_path(inputs, output=None, size_dict=None, shapes=None, optimize='auto', canonicalize=True, cache=True) Find only the contraction path for the specific contraction, with fast dispatch of ``optimize``, which can be a preset, path, tree, cotengra optimizer or opt_einsum optimizer. The raw path is a more compact representation of the core tree structure but contains less information on its own, for example sliced indices are not included. :param inputs: The input terms. :type inputs: Sequence[Sequence[Hashable]] :param output: The output term. :type output: Sequence[Hashable], optional :param size_dict: The size of each index, if given, ``shapes`` is ignored. :type size_dict: dict[Hashable, int], optional :param shapes: The shape of each input array. Needed if ``size_dict`` not supplied. :type shapes: Sequence[tuple[int]], optional :param optimize: The optimization strategy to use. This can be: - A string preset, e.g. ``'auto'``, ``'greedy'``, ``'optimal'``. - A ``PathOptimizer`` instance. - An explicit path, e.g. ``[(0, 1), (2, 3), ...]``. - An explicit ``ContractionTree`` instance. 
:type optimize: str, path_like, PathOptimizer, or ContractionTree :param canonicalize: If ``True``, canonicalize the inputs and output so that the indices are relabelled ``'a', 'b', 'c', ...``, etc. in the order they appear. :type canonicalize: bool, optional :param cache: If ``True``, cache the path for the contraction, so that if the same pathfinding is performed multiple times the overhead is negated. Only for hashable ``optimize`` objects. :type cache: bool, optional :returns: **path** -- The contraction path, whose interpretation is thus: the input tensors are assumed to be stored in a list, i.e. indexed by ``range(N)``. Each contraction in the path is a set of positions: the tensors at these locations should be *popped* from the list and then the result of the contraction *appended*. :rtype: tuple[tuple[int]] .. py:function:: array_contract_tree(inputs, output=None, size_dict=None, shapes=None, optimize='auto', canonicalize=True, sort_contraction_indices=False) Get the ``ContractionTree`` for the tensor contraction specified by ``inputs``, ``output`` and ``size_dict``, with optimization strategy given by ``optimize``. The tree can be used to inspect and also perform the contraction. :param inputs: The input terms. :type inputs: Sequence[Sequence[Hashable]] :param output: The output term. :type output: Sequence[Hashable], optional :param size_dict: The size of each index, if given, ``shapes`` is ignored. :type size_dict: dict[Hashable, int], optional :param shapes: The shape of each input array. Needed if ``size_dict`` not supplied. :type shapes: Sequence[tuple[int]], optional :param optimize: The optimization strategy to use. This can be: - A string preset, e.g. ``'auto'``, ``'greedy'``, ``'optimal'``. - A ``PathOptimizer`` instance. - An explicit path, e.g. ``[(0, 1), (2, 3), ...]``. - An explicit ``ContractionTree`` instance. 
:type optimize: str, path_like, PathOptimizer, or ContractionTree :param canonicalize: If ``True``, canonicalize the inputs and output so that the indices are relabelled ``'a', 'b', 'c', ...``, etc. in the order they appear. :type canonicalize: bool, optional :param sort_contraction_indices: If ``True``, call ``tree.sort_contraction_indices()``. :type sort_contraction_indices: bool, optional :rtype: ContractionTree .. seealso:: :obj:`array_contract`, :obj:`array_contract_expression`, :obj:`einsum_tree` .. py:function:: einsum(*args, optimize='auto', cache_expression=True, backend=None, **kwargs) Perform an einsum contraction using `cotengra`, with the strategy given by ``optimize``. By default the path finding and expression building is cached, so that if a matching contraction is performed multiple times the overhead is negated. :param eq: The equation to use for contraction, for example ``'ab,bc->ac'``. :type eq: str :param arrays: The arrays to contract. :type arrays: Sequence[array_like] :param optimize: The optimization strategy to use. This can be: - A string preset, e.g. ``'auto'``, ``'greedy'``, ``'optimal'``. - A ``PathOptimizer`` instance. - An explicit path, e.g. ``[(0, 1), (2, 3), ...]``. - An explicit ``ContractionTree`` instance. If the optimizer provides sliced indices they will be used. :type optimize: str, path_like, PathOptimizer, or ContractionTree :param cache_expression: If ``True``, cache the expression used to contract the arrays. This negates the overhead of pathfinding and building the expression when a contraction is performed multiple times. Only for hashable ``optimize`` objects. :type cache_expression: bool, optional :param backend: If given, the explicit backend to use for the contraction, by default the backend is dispatched automatically. :type backend: str, optional :param kwargs: Passed to :func:`~cotengra.interface.array_contract_expression`. :rtype: array_like .. 
seealso:: :obj:`einsum_expression`, :obj:`einsum_tree`, :obj:`array_contract` .. py:function:: einsum_expression(*args, optimize='auto', constants=None, cache=True, **kwargs) Get a callable 'expression' that will contract tensors with shapes ``shapes`` according to equation ``eq``. The ``optimize`` kwarg can be a path, an optimizer, or a contraction tree. In the latter case, sliced indices, for example, will be used if present. The same is true if ``optimize`` is an optimizer that can directly produce ``ContractionTree`` instances (i.e. has a ``.search()`` method). :param eq: The equation to use for contraction, for example ``'ab,bc->ac'``. The output will be automatically computed if not supplied, but Ellipses (`'...'`) are not supported. :type eq: str :param shapes: The shapes of the tensors to contract, or the constant tensor itself if marked as constant in ``constants``. :type shapes: Sequence[tuple[int]] :param optimize: The optimization strategy to use. This can be: - A string preset, e.g. ``'auto'``, ``'greedy'``, ``'optimal'``. - A ``PathOptimizer`` instance. - An explicit path, e.g. ``[(0, 1), (2, 3), ...]``. - An explicit ``ContractionTree`` instance. If the optimizer provides sliced indices they will be used. :type optimize: str, path_like, PathOptimizer, or ContractionTree :param constants: The indices of tensors to treat as constant, the final expression will take the remaining non-constant tensors as inputs. Note this is a different format to the ``constants`` kwarg of :func:`array_contract_expression` since the actual constant arrays are inserted into ``shapes``. :type constants: Sequence of int, optional :param implementation: What library to use to actually perform the contractions. Options are: - None: let cotengra choose. - "autoray": dispatch with autoray, using the ``tensordot`` and ``einsum`` implementation of the backend. 
- "cotengra": use the ``tensordot`` and ``einsum`` implementation of cotengra, which is based on batch matrix multiplication. This is faster for some backends like numpy, and also enables libraries which don't yet provide ``tensordot`` and ``einsum`` to be used. - "cuquantum": use the cuquantum library to perform the whole contraction (not just individual contractions). - tuple[callable, callable]: manually supply the ``tensordot`` and ``einsum`` implementations to use. :type implementation: str or tuple[callable, callable], optional :param autojit: If ``True``, use :func:`autoray.autojit` to compile the contraction function. :type autojit: bool, optional :param via: If given, the first function will be applied to the input arrays and the second to the output array. For example, moving the tensors from CPU to GPU and back. :type via: tuple[callable, callable], optional :param sort_contraction_indices: If ``True``, call ``tree.sort_contraction_indices()`` before constructing the contraction function. :type sort_contraction_indices: bool, optional :param cache: If ``True``, cache the contraction expression. This negates the overhead of pathfinding and building the expression when a contraction is performed multiple times. Only for hashable ``optimize`` objects. :type cache: bool, optional :returns: **expr** -- A callable, signature ``expr(*arrays)`` that will contract ``arrays`` with shapes matching ``shapes``. :rtype: callable .. seealso:: :obj:`einsum`, :obj:`einsum_tree`, :obj:`array_contract_expression` .. py:function:: einsum_tree(*args, optimize='auto', canonicalize=False, sort_contraction_indices=False) Get the `ContractionTree` for the einsum equation ``eq`` and optimization strategy ``optimize``. The tree can be used to inspect and also perform the contraction. :param eq: The equation to use for contraction, for example ``'ab,bc->ac'``. :type eq: str :param shapes: The shape of each input array. 
:type shapes: Sequence[tuple[int]] :param optimize: The optimization strategy to use. This can be: - A string preset, e.g. ``'auto'``, ``'greedy'``, ``'optimal'``. - A ``PathOptimizer`` instance. - An explicit path, e.g. ``[(0, 1), (2, 3), ...]``. - An explicit ``ContractionTree`` instance. :type optimize: str, path_like, PathOptimizer, or ContractionTree :param canonicalize: If ``True``, canonicalize the inputs and output so that the indices are relabelled ``'a', 'b', 'c', ...``, etc. in the order they appear. :type canonicalize: bool, optional :param sort_contraction_indices: If ``True``, call ``tree.sort_contraction_indices()``. :type sort_contraction_indices: bool, optional :rtype: ContractionTree .. seealso:: :obj:`einsum`, :obj:`einsum_expression`, :obj:`array_contract_tree` .. py:function:: register_preset(preset, optimizer, register_opt_einsum='auto', compressed=False) Register a preset optimizer. .. py:class:: GreedyOptimizer(costmod=1.0, temperature=0.0, simplify=True, accel='auto') Bases: :py:obj:`cotengra.oe.PathOptimizer` Class interface to the greedy optimizer which can be instantiated with default options. .. py:attribute:: __slots__ :value: ('costmod', 'temperature', 'simplify', '_optimize_fn') .. py:method:: maybe_update_defaults(**kwargs) .. py:method:: ssa_path(inputs, output, size_dict, **kwargs) .. py:method:: search(inputs, output, size_dict, **kwargs) .. py:method:: __call__(inputs, output, size_dict, **kwargs) .. py:class:: OptimalOptimizer(minimize='flops', cost_cap=2, search_outer=False, simplify=True, accel='auto') Bases: :py:obj:`cotengra.oe.PathOptimizer` Class interface to the optimal optimizer which can be instantiated with default options. .. py:attribute:: __slots__ :value: ('minimize', 'cost_cap', 'search_outer', 'simplify', '_optimize_fn') .. py:method:: maybe_update_defaults(**kwargs) .. py:method:: ssa_path(inputs, output, size_dict, **kwargs) .. py:method:: search(inputs, output, size_dict, **kwargs) .. 
py:method:: __call__(inputs, output, size_dict, **kwargs) .. py:class:: RandomGreedyOptimizer(max_repeats=32, costmod=(0.1, 4.0), temperature=(0.001, 1.0), seed=None, simplify=True, accel='auto', parallel='auto') Bases: :py:obj:`cotengra.oe.PathOptimizer` Lightweight random greedy optimizer that eschews hyper parameter tuning and contraction tree construction. This is a stateful optimizer that should not be re-used on different contractions. :param max_repeats: The number of random greedy trials to perform. :type max_repeats: int, optional :param costmod: When assessing local greedy scores how much to weight the size of the tensors removed compared to the size of the tensor added:: score = size_ab / costmod - (size_a + size_b) * costmod It is sampled uniformly from the given range. :type costmod: (float, float), optional :param temperature: When assessing local greedy scores, how much to randomly perturb the score. This is implemented as:: score -> sign(score) * log(|score|) - temperature * gumbel() which implements Boltzmann sampling. It is sampled log-uniformly from the given range. :type temperature: (float, float), optional :param seed: The seed for the random number generator. Note that deterministic behavior is only guaranteed within the Python or Rust backend (the `accel` parameter) and parallel settings. :type seed: int, optional :param simplify: Whether to perform simplifications before optimizing. These are: - ignore any indices that appear in all terms - combine any repeated indices within a single term - reduce any non-output indices that only appear on a single term - combine any scalar terms - combine any tensors with matching indices (Hadamard products) Such simplifications may be required in the general case for the proper functioning of the core optimization, but may be skipped if the input indices are already in a simplified form. :type simplify: bool, optional :param accel: Whether to use the accelerated `cotengrust` backend. 
If "auto" the backend is used if available. :type accel: bool or str, optional :param parallel: Whether to use parallel processing. If "auto" the default is to use threads if the accelerated backend is not used, and processes if it is. :type parallel: bool or str, optional .. attribute:: best_ssa_path The best contraction path found so far. :type: list[list[int]] .. attribute:: best_flops The flops (/ contraction cost / number of multiplications) of the best contraction path found so far. :type: float .. py:method:: maybe_update_defaults(**kwargs) .. py:method:: ssa_path(inputs, output, size_dict, **kwargs) .. py:method:: search(inputs, output, size_dict, **kwargs) .. py:method:: __call__(inputs, output, size_dict, **kwargs) .. py:class:: FlowCutterOptimizer(max_time=10, seed=None, executable='flow_cutter_pace17') Bases: :py:obj:`cotengra.oe.PathOptimizer` .. py:method:: run_flowcutter(file, max_time=None) .. py:method:: compute_edge_path(lg) .. py:method:: build_tree(inputs, output, size_dict, memory_limit=None) .. py:method:: __call__(inputs, output, size_dict, memory_limit=None) .. py:function:: optimize_flowcutter(inputs, output, size_dict, memory_limit=None, max_time=10, seed=None) .. py:class:: QuickBBOptimizer(max_time=10, executable='quickbb_64', seed=None) Bases: :py:obj:`cotengra.oe.PathOptimizer` .. py:method:: run_quickbb(fname, outfile, statfile, max_time=None) .. py:method:: build_tree(inputs, output, size_dict) .. py:method:: __call__(inputs, output, size_dict, memory_limit=None) .. py:function:: optimize_quickbb(inputs, output, size_dict, memory_limit=None, max_time=60, seed=None) .. py:function:: plot_contractions(tree, order=None, color_size=(0.6, 0.4, 0.7), color_cost=(0.3, 0.7, 0.5), figsize=(8, 3)) .. py:function:: plot_contractions_alt(tree, x='peak-size', y='flops', color='stage', size='scaling', width=400, height=400, point_opacity=0.8, color_scheme='lightmulti', x_scale='log', y_scale='log', color_scale='log', size_scale='linear') .. 
py:function:: plot_scatter(self, x='size', y='flops', cumulative_time=False, plot_best=False, figsize=(5, 5)) .. py:function:: plot_scatter_alt(self, x='size', y='flops', color='run:Q', color_scheme='purplebluegreen', shape='method:N', width=400, height=400) Plot the trials total flops vs max size. .. py:function:: plot_slicings(slice_finder, color_scheme='RdYlBu_r', relative_flops=False, figsize=(6, 3), point_opacity=0.8) .. py:function:: plot_slicings_alt(slice_finder, color_scheme='redyellowblue', relative_flops=False) .. py:function:: plot_tree(tree, layout='ring', hypergraph_layout='auto', hypergraph_layout_opts=None, k=0.01, iterations=500, span=None, order=None, order_y_pow=1.0, edge_scale=1.0, node_scale=1.0, highlight=(), edge_color='size', edge_colormap='GnBu', edge_max_width=None, node_colormap='YlOrRd', node_color='flops', node_max_size=None, figsize=(5, 5), raw_edge_color=None, raw_edge_alpha=None, tree_root_height=True, tree_alpha=0.8, colorbars=True, plot_raw_graph=True, plot_leaf_labels=False, ax=None) Plot a contraction tree using matplotlib. .. py:function:: plot_tree_ring(tree, **kwargs) .. py:function:: plot_tree_span(tree, **kwargs) .. py:function:: plot_tree_tent(tree, **kwargs) .. py:function:: plot_trials(self, *, x='trial', y='score', figsize=(8, 3), cumulative_time=True, plot_best=True, **kwargs) .. py:function:: plot_trials_alt(self, y=None, width=800, height=300) Plot the trials interactively using altair. .. py:class:: AutoHQOptimizer(**kwargs) Bases: :py:obj:`AutoOptimizer` An optimizer that automatically chooses between optimal and hyper-optimization, designed for everyday use on harder contractions or those that will be repeated many times, and thus warrant a more extensive search. .. py:class:: AutoOptimizer(optimal_cutoff=250, minimize='combo', cache=True, **hyperoptimizer_kwargs) Bases: :py:obj:`cotengra.oe.PathOptimizer` An optimizer that automatically chooses between optimal and hyper-optimization, designed for everyday use. .. 
py:method:: _get_optimizer_hyper_threadsafe() .. py:method:: search(inputs, output, size_dict, **kwargs) .. py:method:: __call__(inputs, output, size_dict, **kwargs) .. py:data:: auto_hq_optimize .. py:data:: auto_optimize .. py:data:: greedy_optimize .. py:data:: optimal_optimize .. py:data:: optimal_outer_optimize .. py:class:: SliceFinder(tree_or_info, target_size=None, target_overhead=None, target_slices=None, temperature=0.01, minimize='flops', allow_outer=True, seed=None) An object to help find the best indices to slice over in order to reduce the memory footprint of a contraction as much as possible whilst introducing as little extra overhead as possible. It searches for and stores ``ContractionCosts``. :param tree_or_info: Object describing the target full contraction to slice. :type tree_or_info: ContractionTree or opt_einsum.PathInfo :param target_size: The target number of entries in the largest tensor of the sliced contraction. The search algorithm will terminate after this is reached. :type target_size: int, optional :param target_slices: The target or minimum number of 'slices' to consider - individual contractions after slicing indices. The search algorithm will terminate after this is breached. This is on top of the current number of slices. :type target_slices: int, optional :param target_overhead: The target increase in total number of floating point operations. For example, a value of ``2.0`` will terminate the search just before the cost of computing all the slices individually breaches twice that of computing the original contraction all at once. :type target_overhead: float, optional :param temperature: When sampling combinations of indices, how far to randomly stray from what looks like the best (local) choice. :type temperature: float, optional .. py:attribute:: plot_slicings .. py:attribute:: plot_slicings_alt .. py:method:: _maybe_default(attr, value) .. 
py:method:: best(k=None, target_size=None, target_overhead=None, target_slices=None) Return the best contraction slicing, subject to target filters. .. py:method:: trial(target_size=None, target_overhead=None, target_slices=None, temperature=None) A single slicing attempt, greedily select indices from the popular pool, subject to the score function, terminating when any of the target criteria are met. .. py:method:: search(max_repeats=16, temperature=None, target_size=None, target_overhead=None, target_slices=None) Repeat trial several times and return the best found so far. .. py:function:: get_symbol(i) Get the symbol corresponding to int ``i`` - runs through the usual 52 letters before resorting to unicode characters, starting at ``chr(192)`` and skipping surrogates. .. rubric:: Examples get_symbol(2) #> 'c' get_symbol(200) #> 'Ŕ' get_symbol(20000) #> '京' .. py:function:: get_symbol_map(inputs) Get a mapping of arbitrary hashable 'indices' to single unicode symbols, matching the canonicalization of the expression. .. py:data:: UniformOptimizer Does no Gaussian process tuning by default, just randomly samples - requires no optimization library. .. py:data:: QuasiRandOptimizer Does no Gaussian process tuning by default, just randomly samples but in a more 'even' way than purely random - requires ``chocolate``. .. py:data:: contract_expression Alias for :func:`cotengra.einsum_expression`. .. py:data:: contract Alias for :func:`cotengra.einsum`. .. py:function:: hyper_optimize(inputs, output, size_dict, memory_limit=None, **opts)
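The symbol scheme documented for ``get_symbol`` above can be sketched in pure Python - an illustrative reimplementation matching the documented examples, not the library's actual code:

```python
import string

def get_symbol_sketch(i):
    """Illustrative sketch of the documented get_symbol scheme: the 52
    ASCII letters first, then unicode characters starting at chr(192),
    skipping the surrogate range (assumed; not cotengra's actual code)."""
    letters = string.ascii_lowercase + string.ascii_uppercase
    if i < 52:
        return letters[i]
    i += 140  # 192 - 52, so i == 52 maps to chr(192)
    if i >= 0xD800:
        i += 0x800  # skip the surrogate code points U+D800-U+DFFF
    return chr(i)

print(get_symbol_sketch(2))      # 'c'
print(get_symbol_sketch(200))    # 'Ŕ'
print(get_symbol_sketch(20000))  # '京'
```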