:py:mod:`cotengra.core` ======================= .. py:module:: cotengra.core .. autoapi-nested-parse:: Core contraction tree data structure and methods. Module Contents --------------- Classes ~~~~~~~ .. autoapisummary:: cotengra.core.SliceInfo cotengra.core.ContractionTree cotengra.core.ContractionTreeCompressed cotengra.core.ContractionTreeMulti cotengra.core.PartitionTreeBuilder Functions ~~~~~~~~~ .. autoapisummary:: cotengra.core.cached_node_property cotengra.core.union_it cotengra.core.legs_union cotengra.core.legs_without cotengra.core.get_with_default cotengra.core.get_slice_strides cotengra.core._reconfigure_tree cotengra.core._slice_and_reconfigure_tree cotengra.core._get_tree_info cotengra.core._describe_tree cotengra.core.jitter cotengra.core.jitter_dict cotengra.core.separate .. py:function:: cached_node_property(name) Decorator for caching information about nodes. .. py:function:: union_it(bs) Non-variadic version of various set type unions. .. py:function:: legs_union(legs_seq) Combine a sequence of legs into a single set of legs, summing their appearances. .. py:function:: legs_without(legs, ind) Discard ``ind`` from legs to create a new set of legs. .. py:function:: get_with_default(k, obj, default) .. py:class:: SliceInfo .. py:property:: sliced_range .. py:attribute:: inner :type: bool .. py:attribute:: ind :type: str .. py:attribute:: size :type: int .. py:attribute:: project :type: Optional[int] .. py:function:: get_slice_strides(sliced_inds) Compute the 'strides' given the (ordered) dictionary of sliced indices. .. py:class:: ContractionTree(inputs, output, size_dict, track_childless=False, track_flops=False, track_write=False, track_size=False, objective=None) Binary tree representing a tensor network contraction. :param inputs: The list of input tensors' indices. :type inputs: sequence of str :param output: The output indices. :type output: sequence of str :param size_dict: The size of each index. 
:type size_dict: dict[str, int] :param track_childless: Whether to dynamically keep track of which nodes are childless. Useful if you are 'divisively' building the tree. :type track_childless: bool, optional :param track_flops: Whether to dynamically keep track of the total number of flops. If ``False``, you can still compute this once the tree is complete. :type track_flops: bool, optional :param track_write: Whether to dynamically keep track of the total number of elements written. If ``False``, you can still compute this once the tree is complete. :type track_write: bool, optional :param track_size: Whether to dynamically keep track of the largest tensor so far. If ``False``, you can still compute this once the tree is complete. :type track_size: bool, optional :param objective: A default objective function to use for further optimization and scoring, for example reconfiguring or computing the combo cost. If not supplied, the default is to create a flops objective when needed. :type objective: str or Objective, optional .. attribute:: children Mapping of each node to two children. :type: dict[node, tuple[node]] .. attribute:: info Information about the tree nodes. The key is the set of inputs (a set of input indices) the node contains. Or in other words, the subgraph of the node. The value is a dictionary to cache information about effective 'leg' indices, size, flops of formation etc. :type: dict[node, dict] .. py:property:: nslices Simple alias for how many independent contractions this tree represents overall. .. py:property:: nchunks The number of 'chunks' - determined by the number of sliced output indices. .. py:attribute:: total_cost .. py:attribute:: contraction_cost_compressed .. py:attribute:: total_cost_compressed .. py:attribute:: remove_ind_ .. py:attribute:: restore_ind_ .. py:attribute:: unslice_rand_ .. py:attribute:: unslice_all_ .. py:attribute:: subtree_reconfigure_ .. py:attribute:: subtree_reconfigure_forest_ .. 
py:attribute:: simulated_anneal .. py:attribute:: simulated_anneal_ .. py:attribute:: parallel_temper .. py:attribute:: parallel_temper_ .. py:attribute:: slice_ .. py:attribute:: slice_and_reconfigure_ .. py:attribute:: slice_and_reconfigure_forest_ .. py:attribute:: compressed_reconfigure_ .. py:attribute:: windowed_reconfigure_ .. py:attribute:: path .. py:attribute:: ssa_path .. py:attribute:: path_surface .. py:attribute:: ssa_path_surface .. py:attribute:: plot_ring .. py:attribute:: plot_tent .. py:attribute:: plot_span .. py:attribute:: plot_flat .. py:attribute:: plot_circuit .. py:attribute:: plot_rubberband .. py:attribute:: plot_contractions .. py:attribute:: plot_contractions_alt .. py:method:: set_state_from(other) Set the internal state of this tree to that of ``other``. .. py:method:: copy() Create a copy of this ``ContractionTree``. .. py:method:: set_default_objective(objective) Set the objective function for this tree. .. py:method:: get_default_objective() Get the objective function for this tree. .. py:method:: get_default_combo_factor() Get the default combo factor for this tree. .. py:method:: node_to_terms(node) Turn a node -- a frozen set of ints -- into the corresponding terms -- a sequence of sets of str corresponding to input indices. .. py:method:: gen_leaves() Generate the nodes representing leaves of the contraction tree, i.e. of size 1 each corresponding to a single input tensor. .. py:method:: get_incomplete_nodes() Get the set of current nodes that have no children and the set of nodes that have no parents. These are the 'childless' and 'parentless' nodes respectively, that need to be contracted to complete the tree. The parentless nodes are grouped into the childless nodes that contain them as subgraphs. :returns: **groups** -- A mapping of childless nodes to the list of parentless nodes beneath them. :rtype: dict[frozenset[int], list[frozenset[int]]] .. seealso:: :obj:`autocomplete` .. 
py:method:: autocomplete(**contract_opts) Contract all remaining node groups (as computed by ``tree.get_incomplete_nodes``) in the tree to complete it. :param contract_opts: Options to pass to ``tree.contract_nodes``. .. seealso:: :obj:`get_incomplete_nodes`, :obj:`contract_nodes` .. py:method:: from_path(inputs, output, size_dict, *, path=None, ssa_path=None, autocomplete='auto', check=False, **kwargs) :classmethod: Create a (completed) ``ContractionTree`` from the usual inputs plus a standard contraction path or 'ssa_path' - you need to supply one. :param inputs: The input indices of each tensor, as single unicode characters. :type inputs: Sequence[Sequence[str]] :param output: The output indices. :type output: Sequence[str] :param size_dict: The size of each index. :type size_dict: dict[str, int] :param path: The contraction path, a sequence of pairs of tensor ids to contract. The ids are linear indices into the list of temporary tensors, which are recycled as each contraction pops a pair and appends the result. This or ``ssa_path`` must be supplied. :type path: Sequence[Sequence[int]], optional :param ssa_path: The contraction path, a sequence of pairs of indices to contract. The indices are single use, as if the result of each contraction is appended to the end of the list of temporary tensors without popping. This or ``path`` must be supplied. :type ssa_path: Sequence[Sequence[int]], optional :param autocomplete: Whether to automatically complete the path, i.e. contract all remaining nodes. If "auto" then a warning is issued if the path is not complete. :type autocomplete: "auto" or bool, optional :param check: Whether to perform some basic checks while creating the contraction nodes. :type check: bool, optional :rtype: ContractionTree .. py:method:: from_info(info, **kwargs) :classmethod: Create a ``ContractionTree`` from an ``opt_einsum.PathInfo`` object. .. 
py:method:: from_eq(eq, size_dict, **kwargs) :classmethod: Create an empty ``ContractionTree`` directly from an equation and set of index sizes. :param eq: The einsum string equation. :type eq: str :param size_dict: The size of each index. :type size_dict: dict[str, int] .. py:method:: get_eq() Get the einsum equation corresponding to this tree. Note that this is the total (or original) equation, so includes indices which have been sliced. :returns: **eq** :rtype: str .. py:method:: get_shapes() Get the shapes of the input tensors corresponding to this tree. :returns: **shapes** :rtype: tuple[tuple[int]] .. py:method:: get_inputs_sliced() Get the input indices corresponding to a single slice of this tree, i.e. with sliced indices removed. :returns: **inputs** :rtype: tuple[tuple[str]] .. py:method:: get_output_sliced() Get the output indices corresponding to a single slice of this tree, i.e. with sliced indices removed. :returns: **output** :rtype: tuple[str] .. py:method:: get_eq_sliced() Get the einsum equation corresponding to a single slice of this tree, i.e. with sliced indices removed. :returns: **eq** :rtype: str .. py:method:: get_shapes_sliced() Get the shapes of the input tensors corresponding to a single slice of this tree, i.e. with sliced indices removed. :returns: **shapes** :rtype: tuple[tuple[int]] .. py:method:: from_edge_path(edge_path, inputs, output, size_dict, check=False, **kwargs) :classmethod: Create a ``ContractionTree`` from an edge elimination ordering. .. py:method:: _add_node(node, check=False) .. py:method:: _remove_node(node) Remove ``node`` from this tree and update the flops and maximum size if tracking them respectively, as well as input pre-processing. .. py:method:: compute_leaf_legs(i) Compute the effective 'outer' indices for the ith input tensor. This is not always simply the ith input indices, due to A) potential slicing and B) potential preprocessing. .. py:method:: has_preprocessing() .. 
py:method:: get_legs(node) Get the effective 'outer' indices for the collection of tensors in ``node``. .. py:method:: get_involved(node) Get all the indices involved in the formation of subgraph ``node``. .. py:method:: get_size(node) Get the tensor size of ``node``. .. py:method:: get_flops(node) Get the FLOPs for the pairwise contraction that will create ``node``. .. py:method:: get_can_dot(node) Get whether this contraction can be performed as a dot product (i.e. with ``tensordot``), or else requires ``einsum``, as it has indices that don't appear exactly twice in either the inputs or the output. .. py:method:: get_inds(node) Get the indices of this node - an ordered string version of ``get_legs`` that starts with ``tree.inputs`` and maintains the order they appear in each contraction 'ABC,abc->ABCabc', to match tensordot. .. py:method:: get_tensordot_axes(node) Get the ``axes`` arg for a tensordot contraction that produces ``node``. The pairs are sorted in order of appearance on the left input. .. py:method:: get_tensordot_perm(node) Get the permutation required, if any, to bring the tensordot output of this node's contraction into line with ``self.get_inds(node)``. .. py:method:: get_einsum_eq(node) Get the einsum string describing the contraction that produces ``node``, unlike ``get_inds`` the characters are mapped into [a-zA-Z], for compatibility with ``numpy.einsum`` for example. .. py:method:: get_centrality(node) .. py:method:: total_flops(dtype=None, log=None) Sum the flops contribution from every node in the tree. :param dtype: Scale the answer depending on the assumed data type. :type dtype: {'float', 'complex', None}, optional .. py:method:: total_write() Sum the total amount of memory that will be created and operated on. .. py:method:: combo_cost(factor=DEFAULT_COMBO_FACTOR, combine=sum, log=None) .. py:method:: max_size(log=None) The size of the largest intermediate tensor. .. 
py:method:: peak_size(order=None, log=None) Get the peak concurrent size of tensors needed - this depends on the traversal order, i.e. the exact contraction path, not just the contraction tree. .. py:method:: contract_stats(force=False) Simultaneously compute the total flops, write and size of the contraction tree. This is more efficient than calling each of the individual methods separately. Once computed, each quantity is then automatically tracked. :returns: **stats** -- The total flops, write and size. :rtype: dict[str, int] .. py:method:: arithmetic_intensity() The ratio of total flops to total write - the higher the better for extracting good computational performance. .. py:method:: contraction_scaling() This is computed simply as the maximum number of indices involved in any single contraction, which will match the scaling assuming that all dimensions are equal. .. py:method:: contraction_cost(log=None) Get the total number of scalar operations ~ time complexity. .. py:method:: contraction_width(log=2) Get log2 of the size of the largest tensor. .. py:method:: compressed_contract_stats(chi=None, order='surface_order', compress_late=None) .. py:method:: total_flops_compressed(chi=None, order='surface_order', compress_late=None, dtype=None, log=None) Estimate the total flops for a compressed contraction of this tree with maximum bond size ``chi``. This includes basic estimates of the ops to perform contractions, QRs and SVDs. .. py:method:: total_write_compressed(chi=None, order='surface_order', compress_late=None, accel='auto', log=None) Compute the total size of all intermediate tensors when a compressed contraction is performed with maximum bond size ``chi``, ordered by ``order``. This is relevant e.g. for time complexity and autodiff space complexity (since every intermediate is kept). .. py:method:: combo_cost_compressed(chi=None, order='surface_order', compress_late=None, factor=None, log=None) .. 
py:method:: max_size_compressed(chi=None, order='surface_order', compress_late=None, log=None) Compute the maximum sized tensor produced when a compressed contraction is performed with maximum bond size ``chi``, ordered by ``order``. This is close to the ideal space complexity if only tensors that are being directly operated on are kept in memory. .. py:method:: peak_size_compressed(chi=None, order='surface_order', compress_late=None, accel='auto', log=None) Compute the peak size of combined intermediate tensors when a compressed contraction is performed with maximum bond size ``chi``, ordered by ``order``. This is the practical space complexity if one is not swapping intermediates in and out of memory. .. py:method:: contraction_width_compressed(chi=None, order='surface_order', compress_late=None, log=2) Compute log2 of the maximum sized tensor produced when a compressed contraction is performed with maximum bond size ``chi``, ordered by ``order``. .. py:method:: _update_tracked(node) .. py:method:: contract_nodes_pair(x, y, legs=None, cost=None, size=None, check=False) Contract node ``x`` with node ``y`` in the tree to create a new parent node, which is returned. :param x: The first node to contract. :type x: frozenset[int] :param y: The second node to contract. :type y: frozenset[int] :param legs: The effective 'legs' of the new node if already known. If not given, this is computed from the inputs of ``x`` and ``y``. :type legs: dict[str, int], optional :param cost: The cost of the contraction if already known. If not given, this is computed from the inputs of ``x`` and ``y``. :type cost: int, optional :param size: The size of the new node if already known. If not given, this is computed from the inputs of ``x`` and ``y``. :type size: int, optional :param check: Whether to check the inputs are valid. :type check: bool, optional :returns: **parent** -- The new parent node of ``x`` and ``y``. :rtype: frozenset[int] .. 
py:method:: contract_nodes(nodes, optimize='auto-hq', check=False, extra_opts=None) Contract an arbitrary number of ``nodes`` in the tree to build up a subtree. The root of this subtree (a new intermediate) is returned. .. py:method:: is_complete() Check every node has two children, unless it is a leaf. .. py:method:: get_default_order() .. py:method:: _traverse_ordered(order) Traverse the tree in the order that minimizes ``order(node)``, but still constrained to produce children before parents. .. py:method:: traverse(order=None) Generate, in order, all the node merges in this tree. Non-recursive! This ensures children are always visited before their parent. :param order: How to order the contractions within the tree. If a callable is given (which should take a node as its argument), try to contract nodes that minimize this function first. :type order: None or callable, optional :returns: The bottom up ordered sequence of tree merges, each a tuple of ``(parent, left_child, right_child)``. :rtype: generator[tuple[node]] .. seealso:: :obj:`descend` .. py:method:: descend(mode='dfs') Generate, from root to leaves, all the node merges in this tree. Non-recursive! This ensures parents are visited before their children. :param mode: How to expand from a parent. :type mode: {'dfs', 'bfs'}, optional :returns: The top down ordered sequence of tree merges, each a tuple of ``(parent, left_child, right_child)``. :rtype: generator[tuple[node]] .. seealso:: :obj:`traverse` .. py:method:: get_subtree(node, size, search='bfs', seed=None) Get a subtree spanning down from ``node`` which will have ``size`` leaves (themselves not necessarily leaves of the actual tree). :param node: The node of the tree to start with. :type node: node :param size: How many subtree leaves to aim for. 
:type size: int :param search: How to build the tree: - 'bfs': breadth first expansion - 'dfs': depth first expansion (largest nodes first) - 'random': random expansion :type search: {'bfs', 'dfs', 'random'}, optional :param seed: Random number generator seed, if ``search`` is 'random'. :type seed: None, int or random.Random, optional :returns: * **sub_leaves** (*tuple[node]*) -- Nodes which are subtree leaves. * **branches** (*tuple[node]*) -- Nodes which are between the subtree leaves and root. .. py:method:: remove_ind(ind, project=None, inplace=False) Remove (i.e. by default slice) index ``ind`` from this contraction tree, taking care to update all relevant information about each node. .. py:method:: restore_ind(ind, inplace=False) Restore (unslice or un-project) index ``ind`` to this contraction tree, taking care to update all relevant information about each node. :param ind: The index to restore. :type ind: str :param inplace: Whether to perform the restoration inplace or not. :type inplace: bool, optional :rtype: ContractionTree .. py:method:: unslice_rand(seed=None, inplace=False) Unslice (restore) a random index from this contraction tree. :param seed: Random number generator seed. :type seed: None, int or random.Random, optional :param inplace: Whether to perform the unslicing inplace or not. :type inplace: bool, optional :rtype: ContractionTree .. py:method:: unslice_all(inplace=False) Unslice (restore) all sliced indices from this contraction tree. :param inplace: Whether to perform the unslicing inplace or not. :type inplace: bool, optional :rtype: ContractionTree .. py:method:: calc_subtree_candidates(pwr=2, what='flops') .. py:method:: subtree_reconfigure(subtree_size=8, subtree_search='bfs', weight_what='flops', weight_pwr=2, select='max', maxiter=500, seed=None, minimize=None, optimize=None, inplace=False, progbar=False) Reconfigure subtrees of this tree with locally optimal paths. :param subtree_size: The size of subtree to consider. 
Cost is exponential in this. :type subtree_size: int, optional :param subtree_search: How to build the subtrees: - 'bfs': breadth-first-search creating balanced subtrees - 'dfs': depth-first-search creating imbalanced subtrees - 'random': random subtree building :type subtree_search: {'bfs', 'dfs', 'random'}, optional :param weight_what: When assessing nodes to build and optimize subtrees from, whether to score them by the (local) contraction cost, or tensor size. :type weight_what: {'flops', 'size'}, optional :param weight_pwr: When assessing nodes to build and optimize subtrees from, how to scale their score into a probability: ``score**(1 / weight_pwr)``. The larger this is the more explorative the algorithm is when ``select='random'``. :type weight_pwr: int, optional :param select: What order to select node subtrees to optimize: - 'max': choose the highest score first - 'min': choose the lowest score first - 'random': choose randomly weighted on score -- see ``weight_pwr``. :type select: {'max', 'min', 'random'}, optional :param maxiter: How many subtree optimizations to perform, the algorithm can terminate before this if all subtrees have been optimized. :type maxiter: int, optional :param seed: A random seed (seeds the Python system random module). :type seed: int, optional :param minimize: Whether to minimize with respect to contraction flops or size. :type minimize: {'flops', 'size'}, optional :param inplace: Whether to perform the reconfiguration inplace or not. :type inplace: bool, optional :param progbar: Whether to show live progress of the reconfiguration. :type progbar: bool, optional :rtype: ContractionTree .. 
py:method:: subtree_reconfigure_forest(num_trees=8, num_restarts=10, restart_fraction=0.5, subtree_maxiter=100, subtree_size=10, subtree_search=('random', 'bfs'), subtree_select=('random', ), subtree_weight_what=('flops', 'size'), subtree_weight_pwr=(2, ), parallel='auto', parallel_maxiter_steps=4, minimize=None, seed=None, progbar=False, inplace=False) 'Forested' version of ``subtree_reconfigure`` which is more explorative and can be parallelized. It stochastically generates a 'forest' of reconfigured trees, then only keeps some fraction of these to generate the next forest. :param num_trees: The number of trees to reconfigure at each stage. :type num_trees: int, optional :param num_restarts: The number of times to halt, prune and then restart the tree reconfigurations. :type num_restarts: int, optional :param restart_fraction: The fraction of trees to keep at each stage and generate the next forest from. :type restart_fraction: float, optional :param subtree_maxiter: Number of subtree reconfigurations per step. ``num_restarts * subtree_maxiter`` is the max number of total subtree reconfigurations for the final tree produced. :type subtree_maxiter: int, optional :param subtree_size: The size of subtrees to search for and reconfigure. :type subtree_size: int, optional :param subtree_search: Tuple of options for the ``search`` kwarg of :meth:`ContractionTree.subtree_reconfigure` to randomly sample. :type subtree_search: tuple[{'random', 'bfs', 'dfs'}], optional :param subtree_select: Tuple of options for the ``select`` kwarg of :meth:`ContractionTree.subtree_reconfigure` to randomly sample. :type subtree_select: tuple[{'random', 'max', 'min'}], optional :param subtree_weight_what: Tuple of options for the ``weight_what`` kwarg of :meth:`ContractionTree.subtree_reconfigure` to randomly sample. 
:type subtree_weight_what: tuple[{'flops', 'size'}], optional :param subtree_weight_pwr: Tuple of options for the ``weight_pwr`` kwarg of :meth:`ContractionTree.subtree_reconfigure` to randomly sample. :type subtree_weight_pwr: tuple[int], optional :param parallel: Whether to parallelize the search. :type parallel: 'auto', False, True, int, or distributed.Client :param parallel_maxiter_steps: If parallelizing, how many steps to break each reconfiguration into in order to evenly saturate many processes. :type parallel_maxiter_steps: int, optional :param minimize: Whether to minimize the total flops or maximum size of the contraction tree. :type minimize: {'flops', 'size', ..., Objective}, optional :param seed: A random seed to use. :type seed: None, int or random.Random, optional :param progbar: Whether to show live progress. :type progbar: bool, optional :param inplace: Whether to perform the subtree reconfiguration inplace. :type inplace: bool, optional :rtype: ContractionTree .. py:method:: slice(target_size=None, target_overhead=None, target_slices=None, temperature=0.01, minimize=None, allow_outer=True, max_repeats=16, reslice=False, seed=None, inplace=False) Slice this tree (turn some indices into indices which are explicitly summed over rather than being part of contractions). The indices are stored in ``tree.sliced_inds``, and the contraction width updated to take account of the slicing. Calling ``tree.contract(arrays)`` will then automatically perform the slicing and summation. :param target_size: The target number of entries in the largest tensor of the sliced contraction. The search algorithm will terminate after this is reached. :type target_size: int, optional :param target_slices: The target or minimum number of 'slices' to consider - individual contractions after slicing indices. The search algorithm will terminate after this is breached. This is on top of the current number of slices. 
:type target_slices: int, optional :param target_overhead: The target increase in total number of floating point operations. For example, a value of ``2.0`` will terminate the search just before the cost of computing all the slices individually breaches twice that of computing the original contraction all at once. :type target_overhead: float, optional :param temperature: How much to randomize the repeated search. :type temperature: float, optional :param minimize: Which metric to score the overhead increase against. :type minimize: {'flops', 'size', ..., Objective}, optional :param allow_outer: Whether to allow slicing of outer indices. :type allow_outer: bool, optional :param max_repeats: How many times to repeat the search with a slight randomization. :type max_repeats: int, optional :param reslice: Whether to reslice the tree, i.e. first remove all currently sliced indices and start the search again. Generally any 'good' sliced indices will be easily found again. :type reslice: bool, optional :param seed: A random seed or generator to use for the search. :type seed: None, int or random.Random, optional :param inplace: Whether to remove the indices from this tree inplace or not. :type inplace: bool, optional :rtype: ContractionTree .. seealso:: :obj:`SliceFinder`, :obj:`ContractionTree.slice_and_reconfigure` .. py:method:: slice_and_reconfigure(target_size, step_size=2, temperature=0.01, minimize=None, allow_outer=True, max_repeats=16, reslice=False, reconf_opts=None, progbar=False, inplace=False) Interleave slicing (removing indices into an exterior sum) with subtree reconfiguration to minimize the overhead induced by this slicing. :param target_size: Slice the tree until the maximum intermediate size is this or smaller. :type target_size: int :param step_size: The minimum size reduction to try and achieve before switching to a round of subtree reconfiguration. 
:type step_size: int, optional :param temperature: The temperature to supply to ``SliceFinder`` for searching for indices. :type temperature: float, optional :param minimize: The metric to minimize when slicing and reconfiguring subtrees. :type minimize: {'flops', 'size', ..., Objective}, optional :param max_repeats: The number of slicing attempts to perform per search. :type max_repeats: int, optional :param progbar: Whether to show live progress. :type progbar: bool, optional :param inplace: Whether to perform the slicing and reconfiguration inplace. :type inplace: bool, optional :param reconf_opts: Supplied to :meth:`ContractionTree.subtree_reconfigure` or :meth:`ContractionTree.subtree_reconfigure_forest`, depending on the ``'forested'`` key value. :type reconf_opts: None or dict, optional .. py:method:: slice_and_reconfigure_forest(target_size, step_size=2, num_trees=8, restart_fraction=0.5, temperature=0.02, max_repeats=32, reslice=False, minimize=None, allow_outer=True, parallel='auto', progbar=False, inplace=False, reconf_opts=None) 'Forested' version of :meth:`ContractionTree.slice_and_reconfigure`. This maintains a 'forest' of trees with different slicing and subtree reconfiguration attempts, pruning the worst at each step and generating a new forest from the best. :param target_size: Slice the tree until the maximum intermediate size is this or smaller. :type target_size: int :param step_size: The minimum size reduction to try and achieve before switching to a round of subtree reconfiguration. :type step_size: int, optional :param num_restarts: The number of times to halt, prune and then restart the tree reconfigurations. :type num_restarts: int, optional :param restart_fraction: The fraction of trees to keep at each stage and generate the next forest from. :type restart_fraction: float, optional :param temperature: The temperature at which to randomize the sliced index search. 
:type temperature: float, optional :param max_repeats: The number of slicing attempts to perform per search. :type max_repeats: int, optional :param parallel: Whether to parallelize the search. :type parallel: 'auto', False, True, int, or distributed.Client :param progbar: Whether to show live progress. :type progbar: bool, optional :param inplace: Whether to perform the slicing and reconfiguration inplace. :type inplace: bool, optional :param reconf_opts: Supplied to :meth:`ContractionTree.slice_and_reconfigure`. :type reconf_opts: None or dict, optional :rtype: ContractionTree .. py:method:: compressed_reconfigure(minimize=None, order_only=False, max_nodes='auto', max_time=None, local_score=None, exploration_power=0, best_score=None, progbar=False, inplace=False) Reconfigure this tree according to ``peak_size_compressed``. :param chi: The maximum bond dimension to consider. :type chi: int :param order_only: Whether to only consider the ordering of the current tree contractions, or all possible contractions, starting with the current. :type order_only: bool, optional :param max_nodes: Set the maximum number of contraction steps to consider. :type max_nodes: int, optional :param max_time: Set the maximum time to spend on the search. :type max_time: float, optional :param local_score: A function that assigns a score to a potential contraction, with a lower score giving more priority to explore that contraction earlier. It should have signature:: local_score(step, new_score, dsize, new_size) where ``step`` is the number of steps so far, ``new_score`` is the score of the contraction so far, ``dsize`` is the change in memory by the current step, and ``new_size`` is the new memory size after contraction. :type local_score: callable, optional :param exploration_power: If not ``0.0``, the inverse power to which the step is raised in the default local score function. Higher values favor exploring more promising branches early on - at the cost of increased memory. 
Ignored if ``local_score`` is supplied. :type exploration_power: float, optional :param best_score: Manually specify an upper bound for best score found so far. :type best_score: float, optional :param progbar: If ``True``, display a progress bar. :type progbar: bool, optional :param inplace: Whether to perform the reconfiguration inplace on this tree. :type inplace: bool, optional :rtype: ContractionTree .. py:method:: windowed_reconfigure(minimize=None, order_only=False, window_size=20, max_iterations=100, max_window_tries=1000, score_temperature=0.0, queue_temperature=1.0, scorer=None, queue_scorer=None, seed=None, inplace=False, progbar=False, **kwargs) .. py:method:: flat_tree(order=None) Create a nested tuple representation of the contraction tree like:: ((0, (1, 2)), ((3, 4), ((5, (6, 7)), (8, 9)))) Such that the contraction will progress like:: ((0, (1, 2)), ((3, 4), ((5, (6, 7)), (8, 9)))) ((0, 12), (34, ((5, 67), 89))) (012, (34, (567, 89))) (012, (34, 56789)) (012, 3456789) 0123456789 Where each integer represents a leaf (i.e. single element node). .. py:method:: get_leaves_ordered() Return the list of leaves as ordered by the contraction tree. :rtype: tuple[frozenset[str]] .. py:method:: get_path(order=None) Generate a standard path from the contraction tree. .. py:method:: get_numpy_path(order=None) Generate a path compatible with the ``optimize`` kwarg of ``numpy.einsum``. .. py:method:: get_ssa_path(order=None) Generate an ssa path from the contraction tree. .. py:method:: surface_order(node) .. py:method:: set_surface_order_from_path(ssa_path) .. py:method:: get_path_surface() .. py:method:: get_ssa_path_surface() .. py:method:: get_spans() Get all (which could mean none) potential embeddings of this contraction tree into a spanning tree of the original graph. :rtype: tuple[dict[frozenset[int], frozenset[int]]] .. py:method:: compute_centralities(combine='mean') Compute a centrality for every node in this contraction tree. .. 
py:method:: get_hypergraph(accel=False) Get a hypergraph representing the uncontracted network (i.e. the leaves). .. py:method:: reset_contraction_indices() Reset all information regarding the explicit contraction indices ordering. .. py:method:: sort_contraction_indices(priority='flops', make_output_contig=True, make_contracted_contig=True, reset=True) Set explicit orders for the contraction indices of this tree to optimize for one of two things: contiguity in contracted ('k') indices, or contiguity of left and right output ('m' and 'n') indices. :param priority: Which order to process the intermediate nodes in. Later nodes re-sort previous nodes so are more likely to keep their ordering. E.g. for 'flops' the most costly contraction will be processed last and thus will be guaranteed to have its indices exactly sorted. :type priority: {'flops', 'size', 'root', 'leaves'}, optional :param make_output_contig: When processing a pairwise contraction, sort the parent contraction indices so that the order of indices is the order they appear from left to right in the two child (input) tensors. :type make_output_contig: bool, optional :param make_contracted_contig: When processing a pairwise contraction, sort the child (input) tensor indices so that all contracted indices appear contiguously. :type make_contracted_contig: bool, optional :param reset: Reset all indices to the default order before sorting. :type reset: bool, optional .. py:method:: print_contractions(sort=None, show_brackets=True) Print each pairwise contraction, with colorized indices (if `colorama` is installed), and other information. .. py:method:: get_contractor(order=None, prefer_einsum=False, strip_exponent=False, check_zero=False, implementation=None, autojit=False, progbar=False) Get a reusable function which performs the contraction corresponding to this tree, cached. :param tree: The contraction tree.
:type tree: ContractionTree :param order: Supplied to :meth:`ContractionTree.traverse`, the order in which to perform the pairwise contractions given by the tree. :type order: str or callable, optional :param prefer_einsum: Prefer to use ``einsum`` for pairwise contractions, even if ``tensordot`` can perform the contraction. :type prefer_einsum: bool, optional :param strip_exponent: If ``True``, the function will strip the exponent from the output array and return it separately. :type strip_exponent: bool, optional :param check_zero: If ``True``, when ``strip_exponent=True``, explicitly check for zero-valued intermediates that would otherwise produce ``nan``, instead terminating early if encountered and returning ``(0.0, 0.0)``. :type check_zero: bool, optional :param implementation: What library to use to actually perform the contractions. Options are: - None: let cotengra choose. - "autoray": dispatch with autoray, using the ``tensordot`` and ``einsum`` implementation of the backend. - "cotengra": use the ``tensordot`` and ``einsum`` implementation of cotengra, which is based on batch matrix multiplication. This is faster for some backends like numpy, and also enables libraries which don't yet provide ``tensordot`` and ``einsum`` to be used. - "cuquantum": use the cuquantum library to perform the whole contraction (not just individual contractions). - tuple[callable, callable]: manually supply the ``tensordot`` and ``einsum`` implementations to use. :type implementation: str or tuple[callable, callable], optional :param autojit: If ``True``, use :func:`autoray.autojit` to compile the contraction function. :type autojit: bool, optional :param progbar: Whether to show progress through the contraction by default. :type progbar: bool, optional :returns: **fn** -- The contraction function, with signature ``fn(*arrays)``. :rtype: callable .. 
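As the ``implementation`` option above notes, a ``tuple[callable, callable]`` of ``tensordot`` and ``einsum`` implementations can be supplied directly. A minimal sketch of such a pair, here just thin wrappers delegating to numpy (the wrapper names are illustrative, not part of cotengra):

```python
import numpy as np

# Illustrative pair that could be passed as
# implementation=(my_tensordot, my_einsum). Here they simply delegate
# to numpy, but they could equally log, dispatch, or cache.
def my_tensordot(a, b, axes):
    return np.tensordot(a, b, axes=axes)

def my_einsum(eq, *arrays):
    return np.einsum(eq, *arrays)

x = np.arange(6.0).reshape(2, 3)
y = np.arange(12.0).reshape(3, 4)
z = my_tensordot(x, y, axes=1)    # plain matrix product, shape (2, 4)
w = my_einsum("ij,jk->ik", x, y)  # the same contraction via einsum
print(np.allclose(z, w))          # True
```

..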
py:method:: contract_core(arrays, order=None, prefer_einsum=False, strip_exponent=False, check_zero=False, backend=None, implementation=None, autojit='auto', progbar=False) Contract ``arrays`` with this tree. The order of the axes and output is assumed to be that of ``tree.inputs`` and ``tree.output``, but with sliced indices removed. This function contracts the core tree only, so if indices have been sliced the arrays supplied need to be sliced as well. :param arrays: The arrays to contract. :type arrays: sequence of array :param order: Supplied to :meth:`ContractionTree.traverse`. :type order: str or callable, optional :param prefer_einsum: Prefer to use ``einsum`` for pairwise contractions, even if ``tensordot`` can perform the contraction. :type prefer_einsum: bool, optional :param backend: What library to use for ``einsum`` and ``transpose``, will be automatically inferred from the arrays if not given. :type backend: str, optional :param autojit: Whether to use ``autoray.autojit`` to jit compile the expression. If "auto", then let ``cotengra`` choose. :type autojit: "auto" or bool, optional :param progbar: Show progress through the contraction. :type progbar: bool, optional .. py:method:: slice_key(i, strides=None) Get the combination of sliced index values for overall slice ``i``. :param i: The overall slice index. :type i: int :returns: **key** -- The value each sliced index takes for slice ``i``. :rtype: dict[str, int] .. py:method:: slice_arrays(arrays, i) Take ``arrays`` and slice the relevant inputs according to ``tree.sliced_inds`` and the dynary representation of ``i``. .. py:method:: contract_slice(arrays, i, **kwargs) Get slices ``i`` of ``arrays`` and then contract them. .. py:method:: gather_slices(slices, backend=None, progbar=False) Gather all the output contracted slices into a single full result. If none of the sliced indices appear in the output, then this is a simple sum - otherwise the slices need to be partially summed and partially stacked.
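The mapping from an overall slice number ``i`` to per-index values (as returned by ``slice_key``) is a mixed-radix ('dynary') decomposition over the sliced index sizes. A standalone sketch of the idea, independent of cotengra's internals:

```python
# Sketch of the mixed-radix decomposition behind slice_key: strides
# are running products of the sliced index sizes, and slice i is
# decoded by integer division and modulo.
def strides_of(sizes):
    """sizes: ordered dict {index: size} -> list of strides."""
    strides, acc = [], 1
    for size in reversed(list(sizes.values())):
        strides.append(acc)
        acc *= size
    return list(reversed(strides))

def slice_key(i, sizes):
    """Decode flat slice number ``i`` into a value per sliced index."""
    return {
        ind: (i // stride) % size
        for (ind, size), stride in zip(sizes.items(), strides_of(sizes))
    }

sizes = {"a": 2, "b": 3}    # 2 * 3 = 6 overall slices
print(slice_key(4, sizes))  # {'a': 1, 'b': 1}
```

Iterating ``i`` from 0 to 5 here would enumerate every combination of values for the two sliced indices exactly once.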
.. py:method:: gen_output_chunks(arrays, with_key=False, progbar=False, **contract_opts) Generate each output chunk of the contraction - i.e. take care of summing internally sliced indices only first. This assumes that the ``sliced_inds`` are sorted by whether they appear in the output or not (the default order). Useful for performing some kind of reduction over the final tensor object like ``fn(x).sum()`` without constructing the entire thing. :param arrays: The arrays to contract. :type arrays: sequence of array :param with_key: Whether to yield the output index configuration key along with the chunk. :type with_key: bool, optional :param progbar: Show progress through the contraction chunks. :type progbar: bool, optional :Yields: * **chunk** (*array*) -- A chunk of the contracted result. * **key** (*dict[str, int]*) -- The value each sliced output index takes for this chunk. .. py:method:: contract(arrays, order=None, prefer_einsum=False, strip_exponent=False, check_zero=False, backend=None, implementation=None, autojit='auto', progbar=False) Contract ``arrays`` with this tree. This function takes *unsliced* arrays and handles the slicing, contractions and gathering. The order of the axes and output is assumed to match that of ``tree.inputs`` and ``tree.output``. :param arrays: The arrays to contract. :type arrays: sequence of array :param order: Supplied to :meth:`ContractionTree.traverse`. :type order: str or callable, optional :param prefer_einsum: Prefer to use ``einsum`` for pairwise contractions, even if ``tensordot`` can perform the contraction. :type prefer_einsum: bool, optional :param strip_exponent: If ``True``, eagerly strip the exponent (in log10) from intermediate tensors to control numerical problems from leaving the range of the datatype. This method then returns the scaled 'mantissa' output array and the exponent separately. 
:type strip_exponent: bool, optional :param check_zero: If ``True``, when ``strip_exponent=True``, explicitly check for zero-valued intermediates that would otherwise produce ``nan``, instead terminating early if encountered and returning ``(0.0, 0.0)``. :type check_zero: bool, optional :param backend: What library to use for ``tensordot``, ``einsum`` and ``transpose``; it will be automatically inferred from the input arrays if not given. :type backend: str, optional :param autojit: Whether to use the 'autojit' feature of `autoray` to compile the contraction expression. :type autojit: bool, optional :param progbar: Whether to show a progress bar. :type progbar: bool, optional :returns: * **output** (*array*) -- The contracted output; it will be scaled if ``strip_exponent==True``. * **exponent** (*float*) -- The exponent of the output in base 10, returned only if ``strip_exponent==True``. .. seealso:: :obj:`contract_core`, :obj:`contract_slice`, :obj:`slice_arrays`, :obj:`gather_slices` .. py:method:: contract_mpi(arrays, comm=None, root=None, **kwargs) Contract the slices of this tree and sum them in parallel - *assuming* we are already running under MPI. :param arrays: The input (unsliced) arrays. :type arrays: sequence of array :param comm: Defaults to ``mpi4py.MPI.COMM_WORLD`` if not given. :type comm: None or mpi4py communicator :param root: If ``root=None``, an ``Allreduce`` will be performed such that every process has the resulting tensor, else if an integer e.g. ``root=0``, the result will be exclusively gathered to that process using ``Reduce``, with every other process returning ``None``. :type root: None or int, optional :param kwargs: Supplied to :meth:`~cotengra.ContractionTree.contract_slice`. .. py:method:: benchmark(dtype, max_time=60, min_reps=3, max_reps=100, warmup=True, **contract_opts) Benchmark the contraction of this tree. :param dtype: The datatype to use.
:type dtype: {"float32", "float64", "complex64", "complex128"} :param max_time: The maximum time to spend benchmarking in seconds. :type max_time: float, optional :param min_reps: The minimum number of repetitions to perform, regardless of time. :type min_reps: int, optional :param max_reps: The maximum number of repetitions to perform, regardless of time. :type max_reps: int, optional :param warmup: Whether to perform a warmup run before the benchmark. If an int, the number of warmup runs to perform. :type warmup: bool or int, optional :param contract_opts: Supplied to :meth:`~cotengra.ContractionTree.contract_slice`. :returns: A dictionary of benchmarking results. The keys are: - "time_per_slice" : float The average time to contract a single slice. - "est_time_total" : float The estimated total time to contract all slices. - "est_gigaflops" : float The estimated gigaflops of the contraction. :rtype: dict .. seealso:: :obj:`contract_slice` .. py:method:: plot_hypergraph(**kwargs) .. py:method:: describe(info='normal', join=' ') Return a string describing the contraction tree. .. py:method:: __repr__() Return repr(self). .. py:method:: __str__() Return str(self). .. py:function:: _reconfigure_tree(tree, *args, **kwargs) .. py:function:: _slice_and_reconfigure_tree(tree, *args, **kwargs) .. py:function:: _get_tree_info(tree) .. py:function:: _describe_tree(tree, info='normal') .. py:class:: ContractionTreeCompressed(inputs, output, size_dict, track_childless=False, track_flops=False, track_write=False, track_size=False, objective=None) Bases: :py:obj:`ContractionTree` A contraction tree for compressed contractions. Currently the only difference is that this defaults to the 'surface' traversal ordering. .. py:attribute:: total_flops .. py:attribute:: total_write .. py:attribute:: combo_cost .. py:attribute:: total_cost .. py:attribute:: max_size .. py:attribute:: peak_size .. py:attribute:: contraction_cost .. py:attribute:: contraction_width .. 
py:attribute:: total_flops_exact .. py:attribute:: total_write_exact .. py:attribute:: combo_cost_exact .. py:attribute:: total_cost_exact .. py:attribute:: max_size_exact .. py:attribute:: peak_size_exact .. py:method:: from_path(inputs, output, size_dict, *, path=None, ssa_path=None, check=False, **kwargs) :classmethod: Create a (completed) ``ContractionTreeCompressed`` from the usual inputs plus a standard contraction path or 'ssa_path' - you need to supply one. This also sets the default 'surface' traversal ordering to be the initial path. .. py:method:: get_default_order() .. py:method:: get_default_objective() Get the objective function for this tree. .. py:method:: get_default_chi() .. py:method:: get_default_compress_late() .. py:method:: get_contractor(*_, **__) :abstractmethod: Get a reusable function which performs the contraction corresponding to this tree, cached. :param tree: The contraction tree. :type tree: ContractionTree :param order: Supplied to :meth:`ContractionTree.traverse`, the order in which to perform the pairwise contractions given by the tree. :type order: str or callable, optional :param prefer_einsum: Prefer to use ``einsum`` for pairwise contractions, even if ``tensordot`` can perform the contraction. :type prefer_einsum: bool, optional :param strip_exponent: If ``True``, the function will strip the exponent from the output array and return it separately. :type strip_exponent: bool, optional :param check_zero: If ``True``, when ``strip_exponent=True``, explicitly check for zero-valued intermediates that would otherwise produce ``nan``, instead terminating early if encountered and returning ``(0.0, 0.0)``. :type check_zero: bool, optional :param implementation: What library to use to actually perform the contractions. Options are: - None: let cotengra choose. - "autoray": dispatch with autoray, using the ``tensordot`` and ``einsum`` implementation of the backend.
- "cotengra": use the ``tensordot`` and ``einsum`` implementation of cotengra, which is based on batch matrix multiplication. This is faster for some backends like numpy, and also enables libraries which don't yet provide ``tensordot`` and ``einsum`` to be used. - "cuquantum": use the cuquantum library to perform the whole contraction (not just individual contractions). - tuple[callable, callable]: manually supply the ``tensordot`` and ``einsum`` implementations to use. :type implementation: str or tuple[callable, callable], optional :param autojit: If ``True``, use :func:`autoray.autojit` to compile the contraction function. :type autojit: bool, optional :param progbar: Whether to show progress through the contraction by default. :type progbar: bool, optional :returns: **fn** -- The contraction function, with signature ``fn(*arrays)``. :rtype: callable .. py:class:: ContractionTreeMulti(inputs, output, size_dict, track_childless=False, track_flops=False, track_write=False, track_size=False, objective=None) Bases: :py:obj:`ContractionTree` Binary tree representing a tensor network contraction. :param inputs: The list of input tensors' indices. :type inputs: sequence of str :param output: The output indices. :type output: str :param size_dict: The size of each index. :type size_dict: dict[str, int] :param track_childless: Whether to dynamically keep track of which nodes are childless. Useful if you are 'divisively' building the tree. :type track_childless: bool, optional :param track_flops: Whether to dynamically keep track of the total number of flops. If ``False`` you can still compute this once the tree is complete. :type track_flops: bool, optional :param track_write: Whether to dynamically keep track of the total number of elements written. If ``False`` you can still compute this once the tree is complete. :type track_write: bool, optional :param track_size: Whether to dynamically keep track of the largest tensor so far.
If ``False`` you can still compute this once the tree is complete. :type track_size: bool, optional :param objective: A default objective function to use for further optimization and scoring, for example reconfiguring or computing the combo cost. If not supplied the default is to create a flops objective when needed. :type objective: str or Objective, optional .. attribute:: children Mapping of each node to two children. :type: dict[node, tuple[node]] .. attribute:: info Information about the tree nodes. The key is the set of inputs (a set of input indices) the node contains. Or in other words, the subgraph of the node. The value is a dictionary to cache information about effective 'leg' indices, size, flops of formation etc. :type: dict[node, dict] .. py:method:: set_varmults(varmults) .. py:method:: get_varmults() .. py:method:: set_numconfigs(numconfigs) .. py:method:: get_numconfigs() .. py:class:: PartitionTreeBuilder(partition_fn) Function wrapper that takes a function that partitions graphs and uses it to build a contraction tree. ``partition_fn`` should have signature: def partition_fn(inputs, output, size_dict, weight_nodes, weight_edges, **kwargs): ... return membership Where ``weight_nodes`` and ``weight_edges`` describe how to weight the nodes and edges of the graph respectively and ``membership`` should be a list of integers of length ``len(inputs)`` labelling which partition each input node should be put in. .. py:method:: build_divide(inputs, output, size_dict, random_strength=0.01, cutoff=10, parts=2, parts_decay=0.5, sub_optimize='greedy', super_optimize='auto-hq', check=False, seed=None, **partition_opts) .. py:method:: build_agglom(inputs, output, size_dict, random_strength=0.01, groupsize=4, check=False, sub_optimize='greedy', seed=None, **partition_opts) .. py:method:: trial_fn(inputs, output, size_dict, **partition_opts) .. py:method:: trial_fn_agglom(inputs, output, size_dict, **partition_opts) .. py:function:: jitter(x, strength, rng) ..
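The ``partition_fn`` contract described above can be illustrated with a toy partitioner. This is purely a sketch of the required signature and return value, not a useful partitioning heuristic (a real ``partition_fn`` would minimize edge cut using the supplied weights):

```python
# Toy partition_fn matching the documented signature: it ignores the
# node and edge weights and assigns inputs to partitions round-robin.
# The returned membership has one integer label per input node.
def partition_fn(inputs, output, size_dict, weight_nodes, weight_edges,
                 parts=2, **kwargs):
    return [i % parts for i in range(len(inputs))]

inputs = ["ab", "bc", "cd", "de"]
membership = partition_fn(
    inputs, "ae", {c: 2 for c in "abcde"}, None, None, parts=2
)
print(membership)  # [0, 1, 0, 1]
```

..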
py:function:: jitter_dict(d, strength, seed=None) .. py:function:: separate(xs, blocks) Partition ``xs`` into ``n`` different lists based on the corresponding labels in ``blocks``.
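A minimal standalone sketch of what ``separate`` does (an illustration of the behaviour, not cotengra's implementation):

```python
# Group the elements of xs into one list per distinct block label,
# preserving the order of elements within each group.
def separate(xs, blocks):
    groups = {}
    for x, b in zip(xs, blocks):
        groups.setdefault(b, []).append(x)
    return [groups[b] for b in sorted(groups)]

print(separate(["a", "b", "c", "d"], [0, 1, 0, 1]))  # [['a', 'c'], ['b', 'd']]
```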