:py:mod:`cotengra.core_multi`
=============================

.. py:module:: cotengra.core_multi


Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   cotengra.core_multi.ContractionTreeMulti


.. py:class:: ContractionTreeMulti(inputs, output, size_dict, sliced_inds, objective, track_cache=False)

   Bases: :py:obj:`cotengra.core.ContractionTree`

   Binary tree representing a tensor network contraction.

   :param inputs: The list of each input tensor's indices.
   :type inputs: sequence of str
   :param output: The output indices.
   :type output: str
   :param size_dict: The size of each index.
   :type size_dict: dict[str, int]
   :param track_childless: Whether to dynamically keep track of which nodes are childless.
                           Useful if you are 'divisively' building the tree.
   :type track_childless: bool, optional
   :param track_flops: Whether to dynamically keep track of the total number of flops. If
                       ``False`` you can still compute this once the tree is complete.
   :type track_flops: bool, optional
   :param track_write: Whether to dynamically keep track of the total number of elements
                       written. If ``False`` you can still compute this once the tree is
                       complete.
   :type track_write: bool, optional
   :param track_size: Whether to dynamically keep track of the largest tensor so far. If
                      ``False`` you can still compute this once the tree is complete.
   :type track_size: bool, optional
   :param objective: A default objective function to use for further optimization and
                     scoring, for example reconfiguring or computing the combo cost. If
                     not supplied the default is to create a flops objective when needed.
   :type objective: str or Objective, optional

   .. attribute:: children

      Mapping of each node to its two children.

      :type: dict[node, tuple[node]]

   .. attribute:: info

      Information about the tree nodes. The key is the set of inputs (a set
      of input indices) the node contains, or in other words, the subgraph
      of the node. The value is a dictionary used to cache information about
      effective 'leg' indices, size, flops of formation, etc.

      :type: dict[node, dict]

   .. py:method:: set_state_from(other)

      Set the internal state of this tree to that of ``other``.


   .. py:method:: _remove_node(node)

      Remove ``node`` from this tree and update the flops and maximum size
      if tracking them respectively, as well as input pre-processing.


   .. py:method:: _update_tracked(node)


   .. py:method:: get_node_var_inds(node)

      Get the set of variable indices that a node depends on.


   .. py:method:: get_node_is_bright(node)

      Get whether a node is 'bright', i.e. contains a different set of
      variable indices to either of its children. If a node is not bright
      then its children never have to be stored in the cache.


   .. py:method:: get_node_mult(node)

      Get the estimated 'multiplicity' of a node, i.e. the number of times
      it will have to be recomputed for different index configurations.


   .. py:method:: get_node_cache_mult(node, sliced_ind_ordering)

      Get the estimated 'cache multiplicity' of a node, i.e. the total
      number of versions with different index configurations that must be
      stored simultaneously in the cache.


   .. py:method:: get_flops(node)

      Get the estimated total cost of computing a node for all index
      configurations.


   .. py:method:: get_cache_contrib(node)


   .. py:method:: peak_size(log=None)

      Get the peak concurrent size of tensors needed - this depends on the
      traversal order, i.e. the exact contraction path, not just the
      contraction tree.


   .. py:method:: reorder_contractions_for_peak_est()

      Reorder the contractions to try and reduce the peak memory usage.


   .. py:method:: reorder_sliced_inds()


   .. py:method:: exact_multi_stats(configs)
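
   A minimal sketch of direct construction, based only on the signature
   shown above. The index names, sizes, choice of ``sliced_inds`` format
   and the ``'flops'`` objective string are illustrative assumptions, as is
   the use of the inherited ``gen_leaves`` / ``contract_nodes`` machinery
   to complete the tree - consult :py:class:`cotengra.core.ContractionTree`
   for the exact tree-building API.

   .. code-block:: python

      from cotengra.core_multi import ContractionTreeMulti

      # three small tensors forming a triangle (illustrative only)
      inputs = ["ab", "bc", "ca"]
      output = ""
      size_dict = {"a": 4, "b": 4, "c": 4}

      tree = ContractionTreeMulti(
          inputs,
          output,
          size_dict,
          sliced_inds=["a"],   # indices to slice over (assumed format)
          objective="flops",   # assumed objective specifier
      )

      # complete the tree by contracting all leaves together, using the
      # inherited ContractionTree interface (assumed usage)
      tree.contract_nodes(tuple(tree.gen_leaves()))

      # estimated peak concurrent memory for the resulting path
      print(tree.peak_size())

   The subclass-specific queries such as ``get_node_mult`` or
   ``get_node_cache_mult`` can then be applied to individual nodes of the
   completed tree to estimate recomputation and cache costs across sliced
   index configurations.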