cotengra.core
#
Module Contents#
Classes#
ContractionTree – Binary tree representing a tensor network contraction.

ContractionTreeCompressed – A contraction tree for compressed contractions.

ContractionTreeMulti – Binary tree representing a tensor network contraction.

PartitionTreeBuilder – Function wrapper that takes a function that partitions graphs and uses it to build a contraction tree.

LineGraph – Very simple linegraph builder and file writer.

HyperGraph – Simple hypergraph builder and writer.
Functions#

cached_node_property – Decorator for caching information about nodes.

union_it – Nonvariadic version of various set type unions.

get_compressed_stats_tracker – Get the correct CompressedStatsTracker class for the given minimize target.

separate – Partition xs into n different lists based on the corresponding labels in blocks.

get_hypergraph – Single entrypoint for creating a, possibly accelerated, HyperGraph.
Attributes#
 cotengra.core.DEFAULT_COMBO_FACTOR = 64#
 cotengra.core.HyperGraphRust#
 cotengra.core.cached_node_property(name)#
Decorator for caching information about nodes.
 cotengra.core.union_it(bs)#
Nonvariadic version of various set type unions.
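As the summary says, union_it takes a single iterable of sets rather than separate set arguments (as set.union(*others) would). A minimal illustrative sketch of such a helper:

```python
from functools import reduce


def union_it(bs):
    # fold a single iterable of sets/frozensets into their union,
    # rather than taking each set as a separate (variadic) argument
    return reduce(lambda a, b: a | b, bs)
```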
 cotengra.core.get_with_default(k, obj, default)#
 class cotengra.core.ContractionTree(inputs, output, size_dict, track_childless=False, track_flops=False, track_write=False, track_size=False)#
Binary tree representing a tensor network contraction.
 Parameters:
inputs (sequence of str) – The list of input tensors' indices.
output (str) – The output indices.
size_dict (dict[str, int]) – The size of each index.
track_childless (bool, optional) – Whether to dynamically keep track of which nodes are childless. Useful if you are 'divisively' building the tree.
track_flops (bool, optional) – Whether to dynamically keep track of the total number of flops. If False, you can still compute this once the tree is complete.
track_size (bool, optional) – Whether to dynamically keep track of the largest tensor so far. If False, you can still compute this once the tree is complete.
 children#
Mapping of each node to two children.
 Type:
dict[node, tuple[node]]
 info#
Information about the tree nodes. The key is the set of inputs (a set of input indices) the node contains – in other words, the subgraph of the node. The value is a dictionary used to cache information about effective 'leg' indices, size, flops of formation, etc.
 Type:
dict[node, dict]
 remove_ind_#
 subtree_reconfigure_#
 subtree_reconfigure_forest_#
 slice_#
 slice_and_reconfigure_#
 slice_and_reconfigure_forest_#
 compressed_reconfigure_#
 path#
 ssa_path#
 path_surface#
 ssa_path_surface#
 plot_ring#
 plot_tent#
 plot_span#
 plot_rubberband#
 plot_contractions#
 plot_contractions_alt#
 set_state_from(other)#
Set the internal state of this tree to that of
other
.
 copy()#
Create a copy of this
ContractionTree
.
 property nslices#
Simple alias for how many independent contractions this tree represents overall.
 property nchunks#
The number of ‘chunks’  determined by the number of sliced output indices.
 node_to_terms(node)#
Turn a node – a frozen set of ints – into the corresponding terms – a sequence of sets of str corresponding to input indices.
 gen_leaves()#
Generate the nodes representing leaves of the contraction tree, i.e. of size 1 each corresponding to a single input tensor.
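Concretely, each leaf is a frozenset containing a single input position, every internal node is the union of its two children, and the children attribute records each merge. An illustrative plain-Python sketch of this node encoding (independent of cotengra):

```python
# three inputs -> three leaf nodes, each a singleton frozenset
inputs = ["ab", "bc", "cd"]
leaves = [frozenset([i]) for i in range(len(inputs))]

# contract inputs 0 and 1 first, then the result with input 2
node01 = leaves[0] | leaves[1]  # frozenset({0, 1})
root = node01 | leaves[2]       # frozenset({0, 1, 2})
children = {node01: (leaves[0], leaves[1]),
            root: (node01, leaves[2])}

# a node maps back to its terms: the index-sets of the inputs it contains
terms01 = [set(inputs[i]) for i in sorted(node01)]  # [{'a','b'}, {'b','c'}]
```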
 classmethod from_path(inputs, output, size_dict, *, path=None, ssa_path=None, check=False, **kwargs)#
Create a (completed) ContractionTree from the usual inputs plus a standard contraction path or 'ssa_path' – you need to supply one.
 classmethod from_info(info, **kwargs)#
Create a ContractionTree from an opt_einsum.PathInfo object.
 classmethod from_eq(eq, size_dict, **kwargs)#
Create an empty ContractionTree directly from an equation and set of shapes.
 Parameters:
eq (str) – The einsum string equation.
size_dict (dict[str, int]) – The size of each index.
 classmethod from_edge_path(edge_path, inputs, output, size_dict, check=False, **kwargs)#
Create a
ContractionTree
from an edge elimination ordering.
 _add_node(node, check=False)#
 _remove_node(node)#
Remove
node
from this tree and update the flops and maximum size if tracking them respectively. Inplace operation.
 get_keep(node)#
Get a set of at least the indices that should be explicitly kept if they appear on
node
(or below).
 get_legs(node)#
Get the effective ‘outer’ indices for the collection of tensors in
node
.
 get_involved(node)#
Get all the indices involved in the formation of subgraph
node
.
 get_removed(node)#
Get the indices that will be removed by the creation of
node
.
 get_size(node)#
Get the tensor size of
node
.
 get_flops(node)#
Get the FLOPs for the pairwise contraction that will create
node
.
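The quantities above can be sketched self-containedly: the flops of a pairwise contraction scale with the product of the dimensions of every involved index, while the new node's 'legs' are the involved indices that must be kept (they appear in the output or on other tensors). An illustrative sketch, not cotengra's actual implementation (pairwise_stats and its argument order are hypothetical):

```python
from math import prod


def pairwise_stats(ix, iy, keep, size_dict):
    # indices appearing on either input term
    involved = set(ix) | set(iy)
    # effective outer indices of the new node
    legs = involved & keep
    # indices summed away by this contraction
    removed = involved - legs
    # scalar-operation count ~ product of all involved dimensions
    flops = prod(size_dict[k] for k in involved)
    return legs, removed, flops
```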
 get_can_dot(node)#
Get whether this contraction can be performed as a dot product (i.e. with
tensordot
), or else requireseinsum
, as it has indices that don’t appear exactly twice in either the inputs or the output.
 get_inds(node)#
Get the indices of this node – an ordered string version of get_legs that starts with tree.inputs and maintains the order they appear in each contraction 'ABC,abc->ABCabc', to match tensordot.
 get_tensordot_axes(node)#
Get the axes arg for a tensordot contraction that produces node. The pairs are sorted in order of appearance on the left input.
 get_tensordot_perm(node)#
Get the permutation required, if any, to bring the tensordot output of this node's contraction into line with self.get_inds(node).
 get_einsum_eq(node)#
Get the einsum string describing the contraction that produces node; unlike get_inds, the characters are mapped into [a-zA-Z], for compatibility with numpy.einsum for example.
 get_centrality(node)#
 total_flops(dtype='float')#
Sum the flops contribution from every node in the tree.
 Parameters:
dtype ({'float', 'complex', None}, optional) – Scale the answer depending on the assumed data type.
 total_write()#
Sum the total amount of memory that will be created and operated on.
 total_cost(factor=DEFAULT_COMBO_FACTOR, combine=sum)#
 max_size()#
The size of the largest intermediate tensor.
 peak_size(order=None)#
Get the peak concurrent size of tensors needed – this depends on the traversal order, i.e. the exact contraction path, not just the contraction tree.
 total_size()#
Get the total sum of all intermediate tensor sizes, this is relevant when, for example, using autodiff on a contraction without checkpointing.
 arithmetic_intensity()#
The ratio of total flops to total write – the higher the better for extracting good computational performance.
 contraction_cost()#
Get the total number of scalar operations ~ time complexity.
 contraction_width()#
Get log2 of the size of the largest tensor.
 compressed_contract_stats(chi, order='surface_order', compress_late=True)#
 max_size_compressed(chi, order='surface_order', compress_late=True)#
Compute the maximum sized tensor produced when a compressed contraction is performed with maximum bond size chi, ordered by order. This is close to the ideal space complexity if only tensors that are being directly operated on are kept in memory.
 peak_size_compressed(chi, order='surface_order', compress_late=True, accel='auto')#
Compute the peak size of combined intermediate tensors when a compressed contraction is performed with maximum bond size chi, ordered by order. This is the practical space complexity if one is not swapping intermediates in and out of memory.
 total_size_compressed(chi, order='surface_order', compress_late=True, accel='auto')#
Compute the total size of all intermediate tensors when a compressed contraction is performed with maximum bond size chi, ordered by order. This is relevant to time complexity and, e.g., autodiff space complexity (since every intermediate is kept).
 contract_nodes_pair(x, y, check=False)#
Contract node x with node y in the tree to create a new parent node.
 contract_nodes(nodes, optimize='autohq', check=False, extra_opts=None)#
Contract an arbitrary number of
nodes
in the tree to build up a subtree. The root of this subtree (a new intermediate) is returned.
 is_complete()#
Check every node has two children, unless it is a leaf.
 get_default_order()#
 _traverse_ordered(order)#
Traverse the tree in the order that minimizes order(node), but still constrained to produce children before parents.
 traverse(order=None)#
Generate, in order, all the node merges in this tree. Nonrecursive! This ensures children are always visited before their parent.
 Parameters:
order (None or callable, optional) – How to order the contractions within the tree. If a callable is given (which should take a node as its argument), try to contract nodes that minimize this function first.
 Returns:
The bottom up ordered sequence of tree merges, each a tuple of (parent, left_child, right_child).
 Return type:
generator[tuple[node]]
See also
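One simple non-recursive scheme that guarantees children are visited before their parents is to process internal nodes in order of increasing leaf count, since each child contains strictly fewer leaves than its parent. An illustrative sketch over a children mapping (hypothetical helper, not cotengra's implementation):

```python
def traverse(children):
    # children: {parent: (left, right)}, nodes are frozensets of leaf ids.
    # a parent always contains strictly more leaves than either child, so
    # sorting by node size yields every child before its parent.
    for parent in sorted(children, key=len):
        left, right = children[parent]
        yield parent, left, right
```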
 descend(mode='dfs')#
Generate, from root to leaves, all the node merges in this tree. Nonrecursive! This ensures parents are visited before their children.
 Parameters:
mode ({'dfs', 'bfs'}, optional) – How to expand from a parent.
 Returns:
The top down ordered sequence of tree merges, each a tuple of (parent, left_child, right_child).
 Return type:
generator[tuple[node]]
See also
 get_subtree(node, size, search='bfs')#
Get a subtree spanning down from node which will have size leaves (themselves not necessarily leaves of the actual tree).
 Parameters:
node (node) – The node of the tree to start with.
size (int) – How many subtree leaves to aim for.
search ({'bfs', 'dfs', 'random'}, optional) –
How to build the tree:
'bfs': breadth first expansion
'dfs': depth first expansion (largest nodes first)
'random': random expansion
 Returns:
sub_leaves (tuple[node]) – Nodes which are subtree leaves.
branches (tuple[node]) – Nodes which are between the subtree leaves and root.
 remove_ind(ind, inplace=False)#
Remove (i.e. slice) index
ind
from this contraction tree, taking care to update all relevant information about each node.
 calc_subtree_candidates(pwr=2, what='flops')#
 subtree_reconfigure(subtree_size=8, subtree_search='bfs', weight_what='flops', weight_pwr=2, select='max', maxiter=500, seed=None, minimize='flops', optimize=None, inplace=False, progbar=False)#
Reconfigure subtrees of this tree with locally optimal paths.
 Parameters:
subtree_size (int, optional) – The size of subtree to consider. Cost is exponential in this.
subtree_search ({'bfs', 'dfs', 'random'}, optional) –
How to build the subtrees:
'bfs': breadth-first-search creating balanced subtrees
'dfs': depth-first-search creating imbalanced subtrees
'random': random subtree building
weight_what ({'flops', 'size'}, optional) – When assessing nodes to build and optimize subtrees from whether to score them by the (local) contraction cost, or tensor size.
weight_pwr (int, optional) – When assessing nodes to build and optimize subtrees from, how to scale their score into a probability: score**(1 / weight_pwr). The larger this is the more explorative the algorithm is when select='random'.
select ({'max', 'min', 'random'}, optional) –
What order to select node subtrees to optimize:
'max': choose the highest score first
'min': choose the lowest score first
'random': choose randomly weighted on score – see weight_pwr.
maxiter (int, optional) – How many subtree optimizations to perform, the algorithm can terminate before this if all subtrees have been optimized.
seed (int, optional) – A random seed (seeds python system random module).
minimize ({'flops', 'size'}, optional) – Whether to minimize with respect to contraction flops or size.
inplace (bool, optional) – Whether to perform the reconfiguration inplace or not.
progbar (bool, optional) – Whether to show live progress of the reconfiguration.
 Return type:
 subtree_reconfigure_forest(num_trees=8, num_restarts=10, restart_fraction=0.5, subtree_maxiter=100, subtree_size=10, subtree_search=('random', 'bfs'), subtree_select=('random',), subtree_weight_what=('flops', 'size'), subtree_weight_pwr=(2,), parallel='auto', parallel_maxiter_steps=4, minimize='flops', progbar=False, inplace=False)#
'Forested' version of subtree_reconfigure which is more explorative and can be parallelized. It stochastically generates a 'forest' of reconfigured trees, then only keeps some fraction of these to generate the next forest.
 Parameters:
num_trees (int, optional) – The number of trees to reconfigure at each stage.
num_restarts (int, optional) – The number of times to halt, prune and then restart the tree reconfigurations.
restart_fraction (float, optional) – The fraction of trees to keep at each stage and generate the next forest from.
subtree_maxiter (int, optional) – Number of subtree reconfigurations per step. num_restarts * subtree_maxiter is the max number of total subtree reconfigurations for the final tree produced.
subtree_size (int, optional) – The size of subtrees to search for and reconfigure.
subtree_search (tuple[{'random', 'bfs', 'dfs'}], optional) – Tuple of options for the search kwarg of ContractionTree.subtree_reconfigure() to randomly sample.
subtree_select (tuple[{'random', 'max', 'min'}], optional) – Tuple of options for the select kwarg of ContractionTree.subtree_reconfigure() to randomly sample.
subtree_weight_what (tuple[{'flops', 'size'}], optional) – Tuple of options for the weight_what kwarg of ContractionTree.subtree_reconfigure() to randomly sample.
subtree_weight_pwr (tuple[int], optional) – Tuple of options for the weight_pwr kwarg of ContractionTree.subtree_reconfigure() to randomly sample.
parallel ('auto', False, True, int, or distributed.Client) – Whether to parallelize the search.
parallel_maxiter_steps (int, optional) – If parallelizing, how many steps to break each reconfiguration into in order to evenly saturate many processes.
minimize ({'flops', 'size'}, optional) – Whether to minimize the total flops or maximum size of the contraction tree.
progbar (bool, optional) – Whether to show live progress.
inplace (bool, optional) – Whether to perform the subtree reconfiguration inplace.
 Return type:
 slice(target_size=None, target_overhead=None, target_slices=None, temperature=0.01, minimize='flops', allow_outer=True, max_repeats=16, inplace=False)#
Slice this tree (turn some indices into indices which are explicitly summed over rather than being part of contractions). The indices are stored in tree.sliced_inds, and the contraction width is updated to take account of the slicing. Calling tree.contract(arrays) will moreover automatically perform the slicing and summation.
 Parameters:
target_size (int, optional) – The target number of entries in the largest tensor of the sliced contraction. The search algorithm will terminate after this is reached.
target_slices (int, optional) – The target or minimum number of 'slices' to consider – individual contractions after slicing indices. The search algorithm will terminate after this is breached.
target_overhead (float, optional) – The target increase in total number of floating point operations. For example, a value of 2.0 will terminate the search just before the cost of computing all the slices individually breaches twice that of computing the original contraction all at once.
temperature (float, optional) – How much to randomize the repeated search.
minimize ({'flops', 'size', ...}, optional) – Which metric to score the overhead increase against.
allow_outer (bool, optional) – Whether to allow slicing of outer indices.
max_repeats (int, optional) – How many times to repeat the search with a slight randomization.
inplace (bool, optional) – Whether to remove the indices from this tree inplace or not.
 Return type:
See also
SliceFinder, ContractionTree.slice_and_reconfigure
 slice_and_reconfigure(target_size, step_size=2, temperature=0.01, minimize='flops', allow_outer=True, max_repeats=16, reconf_opts=None, progbar=False, inplace=False)#
Interleave slicing (removing indices into an exterior sum) with subtree reconfiguration to minimize the overhead induced by this slicing.
 Parameters:
target_size (int) – Slice the tree until the maximum intermediate size is this or smaller.
step_size (int, optional) – The minimum size reduction to try and achieve before switching to a round of subtree reconfiguration.
temperature (float, optional) – The temperature to supply to SliceFinder for searching for indices.
max_repeats (int, optional) – The number of slicing attempts to perform per search.
progbar (bool, optional) – Whether to show live progress.
inplace (bool, optional) – Whether to perform the slicing and reconfiguration inplace.
reconf_opts (None or dict, optional) – Supplied to
ContractionTree.subtree_reconfigure()
orContractionTree.subtree_reconfigure_forest()
, depending on ‘forested’ key value.
 slice_and_reconfigure_forest(target_size, step_size=2, num_trees=8, restart_fraction=0.5, temperature=0.02, max_repeats=32, minimize='flops', allow_outer=True, parallel='auto', progbar=False, inplace=False, reconf_opts=None)#
'Forested' version of ContractionTree.slice_and_reconfigure(). This maintains a 'forest' of trees with different slicing and subtree reconfiguration attempts, pruning the worst at each step and generating a new forest from the best.
 Parameters:
target_size (int) – Slice the tree until the maximum intermediate size is this or smaller.
step_size (int, optional) – The minimum size reduction to try and achieve before switching to a round of subtree reconfiguration.
num_restarts (int, optional) – The number of times to halt, prune and then restart the tree reconfigurations.
restart_fraction (float, optional) – The fraction of trees to keep at each stage and generate the next forest from.
temperature (float, optional) – The temperature at which to randomize the sliced index search.
max_repeats (int, optional) – The number of slicing attempts to perform per search.
parallel ('auto', False, True, int, or distributed.Client) – Whether to parallelize the search.
progbar (bool, optional) – Whether to show live progress.
inplace (bool, optional) – Whether to perform the slicing and reconfiguration inplace.
reconf_opts (None or dict, optional) – Supplied to
ContractionTree.slice_and_reconfigure()
.
 Return type:
 compressed_reconfigure(chi, minimize='peak', order_only=False, max_nodes='auto', max_time=None, local_score=None, exploration_power=0, best_score=None, progbar=False, inplace=False)#
Reconfigure this tree according to peak_size_compressed.
 Parameters:
chi (int) – The maximum bond dimension to consider.
order_only (bool, optional) – Whether to only consider the ordering of the current tree contractions, or all possible contractions, starting with the current.
max_nodes (int, optional) – Set the maximum number of contraction steps to consider.
max_time (float, optional) – Set the maximum time to spend on the search.
local_score (callable, optional) – A function that assigns a score to a potential contraction, with a lower score giving more priority to explore that contraction earlier. It should have signature local_score(step, new_score, dsize, new_size), where step is the number of steps so far, new_score is the score of the contraction so far, dsize is the change in memory by the current step, and new_size is the new memory size after contraction.
exploration_power (float, optional) – If not 0.0, the inverse power to which the step is raised in the default local score function. Higher values favor exploring more promising branches early on – at the cost of increased memory. Ignored if local_score is supplied.
best_score (float, optional) – Manually specify an upper bound for the best score found so far.
progbar (bool, optional) – If True, display a progress bar.
inplace (bool, optional) – Whether to perform the reconfiguration inplace on this tree.
 Return type:
 flat_tree(order=None)#
Create a nested tuple representation of the contraction tree like:
((0, (1, 2)), ((3, 4), ((5, (6, 7)), (8, 9))))
Such that the contraction will progress like:
((0, (1, 2)), ((3, 4), ((5, (6, 7)), (8, 9))))
((0, 12), (34, ((5, 67), 89)))
(012, (34, (567, 89)))
(012, (34, 56789))
(012, 3456789)
0123456789
Where each integer represents a leaf (i.e. single element node).
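Such a nested-tuple tree can be consumed recursively; for example, collecting its leaves in the order they appear (an illustrative helper, not part of the API):

```python
def flat_leaves(tree):
    # recursively collect the integer leaves of a nested-tuple
    # contraction tree, left to right
    if isinstance(tree, tuple):
        return [leaf for sub in tree for leaf in flat_leaves(sub)]
    return [tree]
```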
 get_leaves_ordered()#
Return the list of leaves as ordered by the contraction tree.
 Return type:
tuple[frozenset[str]]
 get_path(order=None)#
Generate a standard path from the contraction tree.
 get_numpy_path(order=None)#
Generate a path compatible with the optimize kwarg of numpy.einsum.
 get_ssa_path(order=None)#
Generate an ssa path from the contraction tree.
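An ssa ('single static assignment') path labels every intermediate with a fresh id, whereas a standard path refers to positions in a shrinking list of terms. The usual conversion between the two can be sketched as follows (an illustrative helper; cotengra and opt_einsum provide their own):

```python
def ssa_to_linear(ssa_path, num_inputs):
    # ids holds the ssa id currently stored at each 'linear' position
    ids = list(range(num_inputs))
    next_id = num_inputs
    path = []
    for i, j in ssa_path:
        # record the current positions of the two ssa ids
        path.append((ids.index(i), ids.index(j)))
        # remove the contracted terms, append the new intermediate
        ids = [x for x in ids if x not in (i, j)]
        ids.append(next_id)
        next_id += 1
    return path
```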
 surface_order(node)#
 set_surface_order_from_path(ssa_path)#
 get_path_surface()#
 get_ssa_path_surface()#
 get_spans()#
Get all (which could mean none) potential embeddings of this contraction tree into a spanning tree of the original graph.
 Return type:
tuple[dict[frozenset[int], frozenset[int]]]
 compute_centralities(combine='mean')#
Compute a centrality for every node in this contraction tree.
 get_hypergraph(accel=False)#
Get a hypergraph representing the uncontracted network (i.e. the leaves).
 reset_contraction_indices()#
Reset all information regarding the explicit contraction indices ordering.
 sort_contraction_indices(priority='flops', make_output_contig=True, make_contracted_contig=True, reset=True)#
Set explicit orders for the contraction indices of this self to optimize for one of two things: contiguity in contracted (‘k’) indices, or contiguity of left and right output (‘m’ and ‘n’) indices.
 Parameters:
priority ({'flops', 'size', 'root', 'leaves'}, optional) – Which order to process the intermediate nodes in. Later nodes resort previous nodes so are more likely to keep their ordering. E.g. for 'flops' the most costly contraction will be processed last and thus will be guaranteed to have its indices exactly sorted.
make_output_contig (bool, optional) – When processing a pairwise contraction, sort the parent contraction indices so that the order of indices is the order they appear from left to right in the two child (input) tensors.
make_contracted_contig (bool, optional) – When processing a pairwise contraction, sort the child (input) tensor indices so that all contracted indices appear contiguously.
reset (bool, optional) – Reset all indices to the default order before sorting.
 print_contractions(sort=None, show_brackets=True)#
Print each pairwise contraction, with colorized indices (if colorama is installed), and other information.
 _contract_core(arrays, order=None, prefer_einsum=False, strip_exponent=False, backend=None, check=False, progbar=False)#
 contract_core(arrays, order=None, prefer_einsum=False, strip_exponent=False, backend=None, autojit=False, check=False, progbar=False)#
Contract arrays with this tree. The order of the axes and output is assumed to be that of tree.inputs and tree.output, but with sliced indices removed. This function contracts the core tree and thus if indices have been sliced the arrays supplied need to be sliced as well.
 Parameters:
arrays (sequence of array) – The arrays to contract.
order (str or callable, optional) – Supplied to ContractionTree.traverse().
prefer_einsum (bool, optional) – Prefer to use einsum for pairwise contractions, even if tensordot can perform the contraction.
backend (str, optional) – What library to use for einsum and transpose, will be automatically inferred from the arrays if not given.
autojit (bool, optional) – Whether to use autoray.autojit to jit compile the expression.
check (bool, optional) – Perform some basic error checks.
progbar (bool, optional) – Show progress through the contraction.
 slice_arrays(arrays, i)#
Take arrays and slice the relevant inputs according to tree.sliced_inds and the dynary representation of i.
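The 'dynary' representation generalizes binary counting: the flat slice number i is decomposed into one coordinate per sliced index, with the sliced index sizes acting as mixed radices. An illustrative sketch of that decomposition (the function name and argument order here are assumptions, not cotengra's API):

```python
def dynary(i, sizes):
    # decompose flat index i into mixed-radix coordinates,
    # most significant first, using sizes as the radices
    coords = []
    for d in reversed(sizes):
        i, r = divmod(i, d)
        coords.append(r)
    return tuple(reversed(coords))
```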
 contract_slice(arrays, i, **kwargs)#
Get slices i of arrays and then contract them.
 gather_slices(slices, backend=None, progbar=False)#
Gather all the output contracted slices into a single full result. If none of the sliced indices appear in the output, then this is a simple sum – otherwise the slices need to be partially summed and partially stacked.
 gen_output_chunks(arrays, **contract_opts)#
Generate each output chunk of the contraction – i.e. take care of summing internally sliced indices only first. This assumes that the sliced_inds are sorted by whether they appear in the output or not (the default order). Useful for performing some kind of reduction over the final tensor object like fn(x).sum() without constructing the entire thing.
 contract(arrays, order=None, prefer_einsum=False, strip_exponent=False, backend=None, autojit=False, check=False, progbar=False)#
Contract arrays with this tree. This function takes unsliced arrays and handles the slicing, contractions and gathering. The order of the axes and output is assumed to match that of tree.inputs and tree.output.
 Parameters:
arrays (sequence of array) – The arrays to contract.
order (str or callable, optional) – Supplied to ContractionTree.traverse().
prefer_einsum (bool, optional) – Prefer to use einsum for pairwise contractions, even if tensordot can perform the contraction.
strip_exponent (bool, optional) – If True, eagerly strip the exponent (in log10) from intermediate tensors to control numerical problems from leaving the range of the datatype. This method then returns the scaled 'mantissa' output array and the exponent separately.
backend (str, optional) – What library to use for tensordot, einsum and transpose, it will be automatically inferred from the input arrays if not given.
autojit (bool, optional) – Whether to use the 'autojit' feature of autoray to compile the contraction expression.
check (bool, optional) – Perform some basic error checks.
progbar (bool, optional) – Whether to show a progress bar.
 Returns:
output (array) – The contracted output, it will be scaled if strip_exponent==True.
exponent (float) – The exponent of the output in base 10, returned only if strip_exponent==True.
See also
 contract_mpi(arrays, comm=None, root=None, **kwargs)#
Contract the slices of this tree and sum them in parallel – assuming we are already running under MPI.
 Parameters:
arrays (sequence of array) – The input (unsliced) arrays.
comm (None or mpi4py communicator) – Defaults to mpi4py.MPI.COMM_WORLD if not given.
root (None or int, optional) – If root=None, an Allreduce will be performed such that every process has the resulting tensor; else, if an integer, e.g. root=0, the result will be exclusively gathered to that process using Reduce, with every other process returning None.
kwargs – Supplied to contract_slice().
 plot_hypergraph(**kwargs)#
 __repr__()#
Return repr(self).
 cotengra.core._reconfigure_tree(tree, *args, **kwargs)#
 cotengra.core._slice_and_reconfigure_tree(tree, *args, **kwargs)#
 cotengra.core._get_tree_info(tree)#
 cotengra.core.score_flops(trial)#
 cotengra.core.score_write(trial)#
 cotengra.core.score_size(trial)#
 cotengra.core.score_combo(trial, factor=DEFAULT_COMBO_FACTOR)#
 cotengra.core.score_limit(trial, factor=DEFAULT_COMBO_FACTOR)#
 cotengra.core.score_size_compressed(trial, chi='auto')#
 cotengra.core.score_peak_size_compressed(trial, chi='auto')#
 cotengra.core.score_write_compressed(trial, chi='auto')#
 cotengra.core.score_flops_compressed(trial, chi='auto')#
 cotengra.core.score_combo_compressed(trial, chi='auto', factor=DEFAULT_COMBO_FACTOR)#
 cotengra.core.score_matcher#
 cotengra.core.parse_minimize(minimize)#
 cotengra.core.get_score_fn(minimize)#
 cotengra.core._describe_tree(tree)#
 class cotengra.core.CompressedStatsTracker(hg)#
 __slots__ = ['flops', 'max_size', 'peak_size', 'write', 'current_size', 'last_size_change']#
 copy()#
 update_peak_size_pre_compress(hg, i, j)#
Subtract the tensors' sizes and also their neighbors' sizes (since both will change with compression).
 update_flops_pre_contract(hg, i, j)#
Compute the flops just of the contraction.
 update_peak_size_post_contract(hg, ij)#
Add the new tensor's size and also its neighbors' sizes (since these will have changed with compression).
 update_size_post_contract(hg, ij)#
Compute the size just of the new tensor.
 update_pre_compress(hg, i, j)#
Default method computes everything.
 update_pre_contract(hg, i, j)#
Default method computes everything.
 update_post_contract(hg, ij)#
Default method computes everything.
 property score#
 __repr__()#
Return repr(self).
 class cotengra.core.CompressedStatsTrackerSize(hg)#
Bases:
CompressedStatsTracker
 property score#
 class cotengra.core.CompressedStatsTrackerPeak(hg)#
Bases:
CompressedStatsTracker
 property score#
 class cotengra.core.CompressedStatsTrackerWrite(hg)#
Bases:
CompressedStatsTracker
 property score#
 class cotengra.core.CompressedStatsTrackerFlops(hg)#
Bases:
CompressedStatsTracker
 property score#
 class cotengra.core.CompressedStatsTrackerCombo(hg)#
Bases:
CompressedStatsTracker
 factor#
 property score#
 cotengra.core._trackers#
 cotengra.core.get_compressed_stats_tracker(minimize)#
Get the correct CompressedStatsTracker class for the given minimize target.
 class cotengra.core.ContractionTreeCompressed(inputs, output, size_dict, track_childless=False, track_flops=False, track_write=False, track_size=False)#
Bases:
ContractionTree
A contraction tree for compressed contractions. Currently the only difference is that this defaults to the ‘surface’ traversal ordering.
 classmethod from_path(inputs, output, size_dict, *, path=None, ssa_path=None, check=False, **kwargs)#
Create a (completed) ContractionTreeCompressed from the usual inputs plus a standard contraction path or 'ssa_path' – you need to supply one. This also sets the default 'surface' traversal ordering to be the initial path.
 get_default_order()#
 class cotengra.core.ContractionTreeMulti(inputs, output, size_dict, track_childless=False, track_flops=False, track_write=False, track_size=False)#
Bases:
ContractionTree
Binary tree representing a tensor network contraction.
 Parameters:
inputs (sequence of str) – The list of input tensors' indices.
output (str) – The output indices.
size_dict (dict[str, int]) – The size of each index.
track_childless (bool, optional) – Whether to dynamically keep track of which nodes are childless. Useful if you are 'divisively' building the tree.
track_flops (bool, optional) – Whether to dynamically keep track of the total number of flops. If False, you can still compute this once the tree is complete.
track_size (bool, optional) – Whether to dynamically keep track of the largest tensor so far. If False, you can still compute this once the tree is complete.
 children#
Mapping of each node to two children.
 Type:
dict[node, tuple[node]]
 info#
Information about the tree nodes. The key is the set of inputs (a set of input indices) the node contains – in other words, the subgraph of the node. The value is a dictionary used to cache information about effective 'leg' indices, size, flops of formation, etc.
 Type:
dict[node, dict]
 set_varmults(varmults)#
 get_varmults()#
 set_numconfigs(numconfigs)#
 get_numconfigs()#
 class cotengra.core.PartitionTreeBuilder(partition_fn)#
Function wrapper that takes a function that partitions graphs and uses it to build a contraction tree. partition_fn should have signature:

def partition_fn(inputs, output, size_dict,
                 weight_nodes, weight_edges, **kwargs):
    ...
    return membership

Where weight_nodes and weight_edges describe how to weight the nodes and edges of the graph respectively and membership should be a list of integers of length len(inputs) labelling which partition each input node should be put in.
 build_divide(inputs, output, size_dict, random_strength=0.01, cutoff=10, parts=2, parts_decay=0.5, sub_optimize='auto', super_optimize='autohq', check=False, **partition_opts)#
 build_agglom(inputs, output, size_dict, random_strength=0.01, groupsize=4, check=False, sub_optimize='greedy', **partition_opts)#
 trial_fn(inputs, output, size_dict, **partition_opts)#
 trial_fn_agglom(inputs, output, size_dict, **partition_opts)#
 cotengra.core.calc_edge_weight(ix, size_dict, scale='log')#
 cotengra.core.calc_edge_weight_float(ix, size_dict, scale='log')#
 cotengra.core.calc_node_weight(term, size_dict, scale='linear')#
 cotengra.core.calc_node_weight_float(term, size_dict, scale='linear')#
 cotengra.core.jitter(x, strength)#
 cotengra.core.jitter_dict(d, strength)#
 cotengra.core.separate(xs, blocks)#
Partition xs into n different lists based on the corresponding labels in blocks.
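A minimal sketch of what separate does, assuming blocks is a sequence of integer labels, one per element of xs (the actual return shape may differ):

```python
def separate(xs, blocks):
    # Partition xs into lists keyed by the corresponding label in blocks.
    n = max(blocks) + 1
    groups = [[] for _ in range(n)]
    for x, b in zip(xs, blocks):
        groups[b].append(x)
    return groups

print(separate(["a", "b", "c", "d"], [0, 1, 0, 2]))
# [['a', 'c'], ['b'], ['d']]
```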
 class cotengra.core.LineGraph(inputs, output)#
Very simple linegraph builder and file writer.
 to_gr_str()#
 to_gr_file(fname)#
 to_cnf_str()#
 to_cnf_file(fname)#
 cotengra.core.popcount(x)#
 cotengra.core.dict_affine_renorm(d)#
 class cotengra.core.HyperGraph(inputs, output=None, size_dict=None)#
Simple hypergraph builder and writer.
 Parameters:
inputs (sequence of list[str] or dict[int, list[str]]) – The nodes. If given as a dict, the keys will be taken as the node enumeration rather than range(len(inputs)).
output (str, optional) – Output indices.
size_dict (dict[str, int], optional) – Size of each index.
 nodes#
Mapping of each node to the list of edges incident to it.
 Type:
dict[int, list[str]]
 edges#
Mapping of each hyperedge to the list of nodes it is incident to.
 Type:
dict[str, list[int]]
 num_nodes#
The number of nodes.
 Type:
int
 num_edges#
The number of hyperedges.
 Type:
int
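The nodes and edges mappings described above are duals of each other; building both from inputs can be sketched as:

```python
inputs = ["ab", "bc", "ca"]  # a 3-node triangle hypergraph

# nodes: node -> list of incident edges (each input's indices)
nodes = {i: list(term) for i, term in enumerate(inputs)}

# edges: edge -> list of incident nodes (the dual mapping)
edges = {}
for i, term in nodes.items():
    for ix in term:
        edges.setdefault(ix, []).append(i)

print(nodes)  # {0: ['a', 'b'], 1: ['b', 'c'], 2: ['c', 'a']}
print(edges)  # {'a': [0, 2], 'b': [0, 1], 'c': [1, 2]}
```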
 __slots__ = ['inputs', 'output', 'size_dict', 'nodes', 'edges', 'compressed', 'node_counter']#
 plot#
 copy()#
Copy this HyperGraph.
 classmethod from_edges(edges, output=(), size_dict=())#
 get_num_nodes()#
 property num_nodes#
 get_num_edges()#
 property num_edges#
 __len__()#
 edges_size(es)#
Get the combined, i.e. product, size of all edges in es.
 bond_size(i, j)#
Get the combined, i.e. product, size of edges shared by nodes i and j.
 node_size(i)#
Get the size of the term represented by node i.
 neighborhood_size(nodes)#
Get the size of nodes in the immediate neighborhood of nodes.
 contract_pair_cost(i, j)#
Get the cost of contracting nodes i and j: the product of the dimensions of the indices involved.
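The pairwise quantities above follow directly from the nodes mapping: bond_size multiplies the sizes of shared edges, while contract_pair_cost multiplies the sizes of all indices involved. A self-contained sketch:

```python
from math import prod

inputs = ["ab", "bc", "ca"]          # triangle hypergraph
size_dict = {"a": 2, "b": 3, "c": 4}
nodes = {i: list(term) for i, term in enumerate(inputs)}

def bond_size(i, j):
    # product of the sizes of edges shared by nodes i and j
    shared = set(nodes[i]) & set(nodes[j])
    return prod(size_dict[e] for e in shared)

def contract_pair_cost(i, j):
    # product of the dimensions of all indices involved
    involved = set(nodes[i]) | set(nodes[j])
    return prod(size_dict[e] for e in involved)

print(bond_size(0, 1))           # 3  (shared edge 'b')
print(contract_pair_cost(0, 1))  # 24 (2 * 3 * 4)
```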
 total_node_size()#
Get the total size of all nodes.
 output_nodes()#
Get the nodes with output indices.
 neighbors(i)#
Get the neighbors of node i.
 neighbor_edges(i)#
Get the edges incident to all neighbors of node i (including its own edges).
 has_node(i)#
Does this hypergraph have node i?
 get_node(i)#
Get the edges node i is incident to.
 get_edge(e)#
Get the nodes edge e is incident to.
 has_edge(e)#
Does this hypergraph have edge e?
 next_node()#
Get the next available node identifier.
 add_node(inds, node=None)#
Add a node with indices inds and optional identifier node. If not given, the identifier will be generated and returned.
 remove_node(i)#
Remove node i from this hypergraph.
 remove_edge(e)#
Remove edge e from this hypergraph.
 contract(i, j, node=None)#
Combine node i and node j.
 compress(chi, edges=None)#
‘Compress’ multiedges, combining their size up to a maximum of chi.
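A hedged sketch of the multiedge combination just described, assuming the combined bond dimension is simply capped at chi (the real method also rewrites the hypergraph's edge structure):

```python
from math import prod

def compressed_bond_size(edge_sizes, chi):
    # Combine a multiedge's sizes into one effective size, capped at chi.
    return min(prod(edge_sizes), chi)

print(compressed_bond_size([4, 4], 8))   # 8 (16 capped to chi)
print(compressed_bond_size([2, 3], 32))  # 6 (no cap needed)
```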
 candidate_contraction_size(i, j, chi=None)#
Get the size of the node created if i and j were contracted, optionally including the effect of first compressing bonds to size chi.
 simple_distance(region, p=2)#
Compute a simple distance metric from nodes in region to all others. Unlike graph distance, relative connectedness is taken into account.
 simple_closeness(p=0.75, mu=0.5)#
Compute a rough hypergraph ‘closeness’.
 Parameters:
p (float, optional) – Terminate once any node has had H.num_nodes**p visitors. Set greater than 1.0 for no limit (slower).
mu (float, optional) – Let the visitor score decay with this power. The higher this is, the more local connectivity is favored.
 Returns:
scores – The simple hypergraph closenesses, higher being more central.
 Return type:
dict[int, float]
 simple_centrality(r=None, smoothness=2, **closeness_opts)#
A simple algorithm for large hypergraph centrality. First we find a rough closeness centrality, then relax / smooth this by nodes iteratively radiating their centrality to their neighbors.
 Parameters:
r (None or int, optional) – Number of iterations. Defaults to max(10, int(self.num_nodes**0.5)).
smoothness (float, optional) – The smoothness. In conjunction with a high value of r, this will create a smooth gradient from one side of the hypergraph to the other.
closeness_opts – Supplied to HyperGraph.simple_closeness as the starting point.
 Return type:
dict[int, float]
 compute_loops(max_loop_length=None)#
Generate all loops up to a certain length in this hypergraph.
 Parameters:
max_loop_length (None or int, optional) – The maximum loop length to search for. If None, this is set automatically by the length of the first loop found.
 Yields:
loop (tuple[int]) – A set of nodes that form a loop.
 get_laplacian()#
Get the graph Laplacian.
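For a plain graph, the Laplacian is the degree matrix minus the adjacency matrix, L = D - A. A pure-Python sketch for a triangle graph (illustrative only; the method itself operates on this hypergraph):

```python
# Triangle graph on nodes 0, 1, 2.
edge_list = [(0, 1), (1, 2), (2, 0)]
n = 3

# adjacency matrix and node degrees
A = [[0] * n for _ in range(n)]
for i, j in edge_list:
    A[i][j] = A[j][i] = 1
deg = [sum(row) for row in A]

# Laplacian L = D - A; every row sums to zero.
L = [[(deg[i] if i == j else 0) - A[i][j] for j in range(n)]
     for i in range(n)]
print(L)  # [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]
```

The Laplacian is the basis for the resistance-distance quantities in the following methods.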
 get_resistance_distances()#
Get the resistance distance between all nodes of the raw graph.
 resistance_centrality(rescale=True)#
Compute the centrality in terms of the total resistance distance to all other nodes.
 to_networkx(as_tree_leaves=False)#
Convert to a networkx Graph, with hyperedges represented as nodes.
 Parameters:
as_tree_leaves (bool, optional) – If true, the nodes are converted to ‘tree leaf’ form, i.e. node i is mapped to frozenset([i]), to match the nodes in a ContractionTree.
 compute_weights(weight_edges='const', weight_nodes='const')#
 __repr__()#
Return repr(self).
 cotengra.core.get_hypergraph(inputs, output=None, size_dict=None, accel=False)#
Single entrypoint for creating a, possibly accelerated, HyperGraph.