cotengra¶
Hyper optimized contraction trees for large tensor networks and einsums.
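A minimal usage sketch (the equation, shapes and optimizer settings here are illustrative, and numpy is assumed to be installed):

import numpy as np
import cotengra as ctg

# three random tensors forming the chain contraction 'ab,bc,cd->ad'
x, y, z = (np.random.rand(8, 8) for _ in range(3))

# sample contraction trees and keep the best one found
opt = ctg.HyperOptimizer(max_repeats=16, progbar=False)
tree = opt.search(
    [("a", "b"), ("b", "c"), ("c", "d")],
    ("a", "d"),
    {"a": 8, "b": 8, "c": 8, "d": 8},
)

# perform the actual contraction using the tree
result = tree.contract([x, y, z])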
Submodules¶
- cotengra._version
- cotengra.contract
- cotengra.core
- cotengra.core_multi
- cotengra.experimental
- cotengra.hypergraph
- cotengra.hyperoptimizers
- cotengra.interface
- cotengra.oe
- cotengra.parallel
- cotengra.pathfinders
- cotengra.plot
- cotengra.presets
- cotengra.schematic
- cotengra.scoring
- cotengra.slicer
- cotengra.utils
Attributes¶
- Does no gaussian process tuning by default, just randomly samples - requires …
- Does no gaussian process tuning by default, just randomly samples but in a …
- Alias for …
- Alias for …
Classes¶
- ContractionTree: Binary tree representing a tensor network contraction.
- ContractionTreeCompressed: A contraction tree for compressed contractions. Currently the only difference is that this defaults to the 'surface' traversal ordering.
- ContractionTreeMulti: Binary tree representing a tensor network contraction.
- HyperGraph: Simple hypergraph builder and writer.
- HyperCompressedOptimizer: A compressed contraction path optimizer that samples a series of ordered contraction trees while optimizing the hyper parameters used to generate them.
- HyperMultiOptimizer: A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them.
- HyperOptimizer: A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them.
- ReusableHyperCompressedOptimizer: Like HyperCompressedOptimizer but it will re-instantiate the optimizer whenever a new contraction is detected, and also cache the paths found.
- ReusableHyperOptimizer: Like HyperOptimizer but it will re-instantiate the optimizer whenever a new contraction is detected, and also cache the paths (and sliced indices) found.
- Class interface to the greedy optimizer which can be instantiated with …
- Class interface to the optimal optimizer which can be instantiated with …
- Lightweight random greedy optimizer, that eschews hyper parameter …
- A fully random pathfinder, that randomly selects pairs of tensors to …
- An optimizer that automatically chooses between optimal and …
- An optimizer that automatically chooses between optimal and …
- An object to help find the best indices to slice over in order to reduce …
Functions¶
- Single entry-point for creating a, possibly accelerated, HyperGraph.
- Compute a hash for a particular contraction geometry.
- Return a list of currently registered hyper contraction finders.
- Perform the tensor contraction specified by …
- Get a callable 'expression' that will contract tensors with indices and …
- Find only the contraction path for the specific contraction, with fast …
- Get the …
- Perform an einsum contraction, using cotengra, with the strategy given by …
- Get a callable 'expression' that will contract tensors with shapes …
- Get the ContractionTree for the einsum equation …
- Perform a contraction specified by the ncon style indices, using …
- Register a preset optimizer.
- Plot the trials total flops vs max size.
- Plot a contraction tree using matplotlib.
- Plot the trials interactively using altair.
- Get the symbol corresponding to int …
- Get a mapping of arbitrary hashable 'indices' to single unicode symbols, …
Package Contents¶
- class cotengra.ContractionTree(inputs, output, size_dict, track_childless=False, track_flops=False, track_write=False, track_size=False, objective=None)[source]¶
Binary tree representing a tensor network contraction.
- Parameters:
inputs (sequence of str) – The indices of each input tensor.
output (str) – The output indices.
size_dict (dict[str, int]) – The size of each index.
track_childless (bool, optional) – Whether to dynamically keep track of which nodes are childless. Useful if you are ‘divisively’ building the tree.
track_flops (bool, optional) – Whether to dynamically keep track of the total number of flops. If False, you can still compute this once the tree is complete.
track_write (bool, optional) – Whether to dynamically keep track of the total number of elements written. If False, you can still compute this once the tree is complete.
track_size (bool, optional) – Whether to dynamically keep track of the largest tensor so far. If False, you can still compute this once the tree is complete.
objective (str or Objective, optional) – A default objective function to use for further optimization and scoring, for example reconfiguring or computing the combo cost. If not supplied the default is to create a flops objective when needed.
- children¶
Mapping of each node to two children.
- Type:
dict[node, tuple[node]]
- info¶
Information about the tree nodes. The key is the set of inputs (a set of input indices) the node contains, or in other words, the subgraph of the node. The value is a dictionary to cache information about effective 'leg' indices, size, flops of formation etc.
- Type:
dict[node, dict]
- inputs¶
- output¶
- N¶
- appearances¶
- preprocessing¶
- children¶
- info¶
- root¶
- track_childless = False¶
- _track_flops = False¶
- _track_write = False¶
- _track_size = False¶
- already_optimized¶
- multiplicity = 1¶
- sliced_inds¶
- sliced_inputs¶
- contraction_cores¶
- _default_objective = None¶
- property nslices¶
Simple alias for how many independent contractions this tree represents overall.
- property nchunks¶
The number of ‘chunks’ - determined by the number of sliced output indices.
- node_to_terms(node)[source]¶
Turn a node – a frozen set of ints – into the corresponding terms – a sequence of sets of str corresponding to input indices.
- gen_leaves()[source]¶
Generate the nodes representing leaves of the contraction tree, i.e. of size 1 each corresponding to a single input tensor.
- get_incomplete_nodes()[source]¶
Get the set of current nodes that have no children and the set of nodes that have no parents. These are the ‘childless’ and ‘parentless’ nodes respectively, that need to be contracted to complete the tree. The parentless nodes are grouped into the childless nodes that contain them as subgraphs.
- Returns:
groups – A mapping of childless nodes to the list of parentless nodes beneath them.
- Return type:
dict[frozenset[int], list[frozenset[int]]]
- autocomplete(**contract_opts)[source]¶
Contract all remaining node groups (as computed by tree.get_incomplete_nodes) in the tree to complete it.
- Parameters:
contract_opts – Options to pass to tree.contract_nodes.
- classmethod from_path(inputs, output, size_dict, *, path=None, ssa_path=None, autocomplete='auto', check=False, **kwargs)[source]¶
Create a (completed) ContractionTree from the usual inputs plus a standard contraction path or 'ssa_path' - you need to supply one.
- Parameters:
inputs (Sequence[Sequence[str]]) – The input indices of each tensor, as single unicode characters.
output (Sequence[str]) – The output indices.
size_dict (dict[str, int]) – The size of each index.
path (Sequence[Sequence[int]], optional) – The contraction path, a sequence of pairs of tensor ids to contract. The ids are linear indices into the list of temporary tensors, which are recycled as each contraction pops a pair and appends the result. This or ssa_path must be supplied.
ssa_path (Sequence[Sequence[int]], optional) – The contraction path, a sequence of pairs of indices to contract. The indices are single use, as if the result of each contraction is appended to the end of the list of temporary tensors without popping. This or path must be supplied.
autocomplete ("auto" or bool, optional) – Whether to automatically complete the path, i.e. contract all remaining nodes. If "auto" then a warning is issued if the path is not complete.
check (bool, optional) – Whether to perform some basic checks while creating the contraction nodes.
- Return type:
ContractionTree
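For instance, a completed tree can be built from a standard linear path (a hypothetical three tensor chain):

import cotengra as ctg

inputs = [("a", "b"), ("b", "c"), ("c", "d")]
output = ("a", "d")
size_dict = {"a": 2, "b": 3, "c": 4, "d": 2}

# contract tensors 0 and 1 first, then that result with tensor 2
tree = ctg.ContractionTree.from_path(
    inputs, output, size_dict, path=[(0, 1), (0, 1)]
)
print(tree.total_flops())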
- classmethod from_info(info, **kwargs)[source]¶
Create a ContractionTree from an opt_einsum.PathInfo object.
- classmethod from_eq(eq, size_dict, **kwargs)[source]¶
Create an empty ContractionTree directly from an equation and set of shapes.
- Parameters:
eq (str) – The einsum string equation.
size_dict (dict[str, int]) – The size of each index.
- get_eq()[source]¶
Get the einsum equation corresponding to this tree. Note that this is the total (or original) equation, so includes indices which have been sliced.
- Returns:
eq
- Return type:
str
- get_shapes()[source]¶
Get the shapes of the input tensors corresponding to this tree.
- Returns:
shapes
- Return type:
tuple[tuple[int]]
- get_inputs_sliced()[source]¶
Get the input indices corresponding to a single slice of this tree, i.e. with sliced indices removed.
- Returns:
inputs
- Return type:
tuple[tuple[str]]
- get_output_sliced()[source]¶
Get the output indices corresponding to a single slice of this tree, i.e. with sliced indices removed.
- Returns:
output
- Return type:
tuple[str]
- get_eq_sliced()[source]¶
Get the einsum equation corresponding to a single slice of this tree, i.e. with sliced indices removed.
- Returns:
eq
- Return type:
str
- get_shapes_sliced()[source]¶
Get the shapes of the input tensors corresponding to a single slice of this tree, i.e. with sliced indices removed.
- Returns:
shapes
- Return type:
tuple[tuple[int]]
- classmethod from_edge_path(edge_path, inputs, output, size_dict, check=False, **kwargs)[source]¶
Create a ContractionTree from an edge elimination ordering.
- _remove_node(node)[source]¶
Remove node from this tree and update the flops and maximum size if tracking them respectively, as well as input pre-processing.
- compute_leaf_legs(i)[source]¶
Compute the effective ‘outer’ indices for the ith input tensor. This is not always simply the ith input indices, due to A) potential slicing and B) potential preprocessing.
- has_hyper_indices()[source]¶
Check if there are any ‘hyper’ indices in the contraction, i.e. indices that don’t appear exactly twice, when considering the inputs and output.
- get_can_dot(node)[source]¶
Get whether this contraction can be performed as a dot product (i.e. with tensordot), or else requires einsum, as it has indices that don't appear exactly twice in either the inputs or the output.
- get_inds(node)[source]¶
Get the indices of this node - an ordered string version of get_legs that starts with tree.inputs and maintains the order they appear in each contraction 'ABC,abc->ABCabc', to match tensordot.
- get_tensordot_axes(node)[source]¶
Get the axes arg for a tensordot contraction that produces node. The pairs are sorted in order of appearance on the left input.
- get_tensordot_perm(node)[source]¶
Get the permutation required, if any, to bring the tensordot output of this node's contraction into line with self.get_inds(node).
- get_einsum_eq(node)[source]¶
Get the einsum string describing the contraction that produces node. Unlike get_inds the characters are mapped into [a-zA-Z], for compatibility with numpy.einsum for example.
- total_flops(dtype=None, log=None)[source]¶
Sum the flops contribution from every node in the tree.
- Parameters:
dtype ({'float', 'complex', None}, optional) – Scale the answer depending on the assumed data type.
- peak_size(order=None, log=None)[source]¶
Get the peak concurrent size of tensors needed - this depends on the traversal order, i.e. the exact contraction path, not just the contraction tree.
- contract_stats(force=False)[source]¶
Simultaneously compute the total flops, write and size of the contraction tree. This is more efficient than calling each of the individual methods separately. Once computed, each quantity is then automatically tracked.
- Returns:
stats – The total flops, write and size.
- Return type:
dict[str, int]
- arithmetic_intensity()[source]¶
The ratio of total flops to total write - the higher the better for extracting good computational performance.
- contraction_scaling()[source]¶
This is computed simply as the maximum number of indices involved in any single contraction, which will match the scaling assuming that all dimensions are equal.
- total_flops_compressed(chi=None, order='surface_order', compress_late=None, dtype=None, log=None)[source]¶
Estimate the total flops for a compressed contraction of this tree with maximum bond size chi. This includes basic estimates of the ops to perform contractions, QRs and SVDs.
- total_write_compressed(chi=None, order='surface_order', compress_late=None, accel='auto', log=None)[source]¶
Compute the total size of all intermediate tensors when a compressed contraction is performed with maximum bond size chi, ordered by order. This is relevant for time complexity and e.g. autodiff space complexity (since every intermediate is kept).
- combo_cost_compressed(chi=None, order='surface_order', compress_late=None, factor=None, log=None)[source]¶
- max_size_compressed(chi=None, order='surface_order', compress_late=None, log=None)[source]¶
Compute the maximum sized tensor produced when a compressed contraction is performed with maximum bond size chi, ordered by order. This is close to the ideal space complexity if only tensors that are being directly operated on are kept in memory.
- peak_size_compressed(chi=None, order='surface_order', compress_late=None, accel='auto', log=None)[source]¶
Compute the peak size of combined intermediate tensors when a compressed contraction is performed with maximum bond size chi, ordered by order. This is the practical space complexity if one is not swapping intermediates in and out of memory.
- contraction_width_compressed(chi=None, order='surface_order', compress_late=None, log=2)[source]¶
Compute log2 of the maximum sized tensor produced when a compressed contraction is performed with maximum bond size chi, ordered by order.
- contract_nodes_pair(x, y, legs=None, cost=None, size=None, check=False)[source]¶
Contract node x with node y in the tree to create a new parent node, which is returned.
- Parameters:
x (frozenset[int]) – The first node to contract.
y (frozenset[int]) – The second node to contract.
legs (dict[str, int], optional) – The effective 'legs' of the new node if already known. If not given, this is computed from the inputs of x and y.
cost (int, optional) – The cost of the contraction if already known. If not given, this is computed from the inputs of x and y.
size (int, optional) – The size of the new node if already known. If not given, this is computed from the inputs of x and y.
check (bool, optional) – Whether to check the inputs are valid.
- Returns:
parent – The new parent node of x and y.
- Return type:
frozenset[int]
- contract_nodes(nodes, optimize='auto-hq', check=False, extra_opts=None)[source]¶
Contract an arbitrary number of nodes in the tree to build up a subtree. The root of this subtree (a new intermediate) is returned.
- _traverse_ordered(order)[source]¶
Traverse the tree in the order that minimizes order(node), but still constrained to produce children before parents.
- traverse(order=None)[source]¶
Generate, in order, all the node merges in this tree. Non-recursive! This ensures children are always visited before their parent.
- Parameters:
order (None or callable, optional) – How to order the contractions within the tree. If a callable is given (which should take a node as its argument), try to contract nodes that minimize this function first.
- Returns:
The bottom up ordered sequence of tree merges, each a tuple of (parent, left_child, right_child).
- Return type:
generator[tuple[node]]
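A sketch of consuming these merges (tree is any completed ContractionTree; nodes are frozensets of leaf indices):

# children are always yielded before their parent
for parent, left, right in tree.traverse():
    print(sorted(left), "+", sorted(right), "->", sorted(parent))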
- descend(mode='dfs')[source]¶
Generate, from root to leaves, all the node merges in this tree. Non-recursive! This ensures parents are visited before their children.
- Parameters:
mode ({'dfs', 'bfs'}, optional) – How to expand from a parent.
- Returns:
The top down ordered sequence of tree merges, each a tuple of (parent, left_child, right_child).
- Return type:
generator[tuple[node]]
- get_subtree(node, size, search='bfs', seed=None)[source]¶
Get a subtree spanning down from node which will have size leaves (themselves not necessarily leaves of the actual tree).
- Parameters:
node (node) – The node of the tree to start with.
size (int) – How many subtree leaves to aim for.
search ({'bfs', 'dfs', 'random'}, optional) –
How to build the tree:
'bfs': breadth first expansion
'dfs': depth first expansion (largest nodes first)
'random': random expansion
seed (None, int or random.Random, optional) – Random number generator seed, if search is 'random'.
- Returns:
sub_leaves (tuple[node]) – Nodes which are subtree leaves.
branches (tuple[node]) – Nodes which are between the subtree leaves and root.
- remove_ind(ind, project=None, inplace=False)[source]¶
Remove (i.e. by default slice) index ind from this contraction tree, taking care to update all relevant information about each node.
- restore_ind(ind, inplace=False)[source]¶
Restore (unslice or un-project) index ind to this contraction tree, taking care to update all relevant information about each node.
- Parameters:
ind (str) – The index to restore.
inplace (bool, optional) – Whether to perform the restoration inplace or not.
- Return type:
ContractionTree
- unslice_rand(seed=None, inplace=False)[source]¶
Unslice (restore) a random index from this contraction tree.
- Parameters:
seed (None, int or random.Random, optional) – Random number generator seed.
inplace (bool, optional) – Whether to perform the unslicing inplace or not.
- Return type:
ContractionTree
- unslice_all(inplace=False)[source]¶
Unslice (restore) all sliced indices from this contraction tree.
- Parameters:
inplace (bool, optional) – Whether to perform the unslicing inplace or not.
- Return type:
ContractionTree
- subtree_reconfigure(subtree_size=8, subtree_search='bfs', weight_what='flops', weight_pwr=2, select='max', maxiter=500, seed=None, minimize=None, optimize=None, inplace=False, progbar=False)[source]¶
Reconfigure subtrees of this tree with locally optimal paths.
- Parameters:
subtree_size (int, optional) – The size of subtree to consider. Cost is exponential in this.
subtree_search ({'bfs', 'dfs', 'random'}, optional) –
How to build the subtrees:
'bfs': breadth-first-search creating balanced subtrees
'dfs': depth-first-search creating imbalanced subtrees
'random': random subtree building
weight_what ({'flops', 'size'}, optional) – When assessing nodes to build and optimize subtrees from, whether to score them by the (local) contraction cost or tensor size.
weight_pwr (int, optional) – When assessing nodes to build and optimize subtrees from, how to scale their score into a probability: score**(1 / weight_pwr). The larger this is the more explorative the algorithm is when select='random'.
.select ({'max', 'min', 'random'}, optional) –
What order to select node subtrees to optimize:
'max': choose the highest score first
'min': choose the lowest score first
'random': choose randomly weighted on score - see weight_pwr.
maxiter (int, optional) – How many subtree optimizations to perform, the algorithm can terminate before this if all subtrees have been optimized.
seed (int, optional) – A random seed (seeds python system random module).
minimize ({'flops', 'size'}, optional) – Whether to minimize with respect to contraction flops or size.
inplace (bool, optional) – Whether to perform the reconfiguration inplace or not.
progbar (bool, optional) – Whether to show live progress of the reconfiguration.
- Return type:
ContractionTree
- subtree_reconfigure_forest(num_trees=8, num_restarts=10, restart_fraction=0.5, subtree_maxiter=100, subtree_size=10, subtree_search=('random', 'bfs'), subtree_select=('random',), subtree_weight_what=('flops', 'size'), subtree_weight_pwr=(2,), parallel='auto', parallel_maxiter_steps=4, minimize=None, seed=None, progbar=False, inplace=False)[source]¶
'Forested' version of subtree_reconfigure which is more explorative and can be parallelized. It stochastically generates a 'forest' of reconfigured trees, then only keeps some fraction of these to generate the next forest.
- Parameters:
num_trees (int, optional) – The number of trees to reconfigure at each stage.
num_restarts (int, optional) – The number of times to halt, prune and then restart the tree reconfigurations.
restart_fraction (float, optional) – The fraction of trees to keep at each stage and generate the next forest from.
subtree_maxiter (int, optional) – Number of subtree reconfigurations per step. num_restarts * subtree_maxiter is the max number of total subtree reconfigurations for the final tree produced.
subtree_size (int, optional) – The size of subtrees to search for and reconfigure.
subtree_search (tuple[{'random', 'bfs', 'dfs'}], optional) – Tuple of options for the search kwarg of ContractionTree.subtree_reconfigure() to randomly sample.
subtree_select (tuple[{'random', 'max', 'min'}], optional) – Tuple of options for the select kwarg of ContractionTree.subtree_reconfigure() to randomly sample.
subtree_weight_what (tuple[{'flops', 'size'}], optional) – Tuple of options for the weight_what kwarg of ContractionTree.subtree_reconfigure() to randomly sample.
subtree_weight_pwr (tuple[int], optional) – Tuple of options for the weight_pwr kwarg of ContractionTree.subtree_reconfigure() to randomly sample.
parallel ('auto', False, True, int, or distributed.Client) – Whether to parallelize the search.
parallel_maxiter_steps (int, optional) – If parallelizing, how many steps to break each reconfiguration into in order to evenly saturate many processes.
minimize ({'flops', 'size', ..., Objective}, optional) – Whether to minimize the total flops or maximum size of the contraction tree.
seed (None, int or random.Random, optional) – A random seed to use.
progbar (bool, optional) – Whether to show live progress.
inplace (bool, optional) – Whether to perform the subtree reconfiguration inplace.
- Return type:
ContractionTree
- slice(target_size=None, target_overhead=None, target_slices=None, temperature=0.01, minimize=None, allow_outer=True, max_repeats=16, reslice=False, seed=None, inplace=False)[source]¶
Slice this tree (turn some indices into indices which are explicitly summed over rather than being part of contractions). The indices are stored in tree.sliced_inds, and the contraction width is updated to take account of the slicing. Calling tree.contract(arrays) will then automatically perform the slicing and summation.
- Parameters:
target_size (int, optional) – The target number of entries in the largest tensor of the sliced contraction. The search algorithm will terminate after this is reached.
target_slices (int, optional) – The target or minimum number of ‘slices’ to consider - individual contractions after slicing indices. The search algorithm will terminate after this is breached. This is on top of the current number of slices.
target_overhead (float, optional) – The target increase in total number of floating point operations. For example, a value of 2.0 will terminate the search just before the cost of computing all the slices individually breaches twice that of computing the original contraction all at once.
temperature (float, optional) – How much to randomize the repeated search.
minimize ({'flops', 'size', ..., Objective}, optional) – Which metric to score the overhead increase against.
allow_outer (bool, optional) – Whether to allow slicing of outer indices.
max_repeats (int, optional) – How many times to repeat the search with a slight randomization.
reslice (bool, optional) – Whether to reslice the tree, i.e. first remove all currently sliced indices and start the search again. Generally any ‘good’ sliced indices will be easily found again.
seed (None, int or random.Random, optional) – A random seed or generator to use for the search.
inplace (bool, optional) – Whether to remove the indices from this tree inplace or not.
- Return type:
ContractionTree
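A sketch of typical usage (the target size is illustrative, and arrays are the unsliced inputs):

# slice until the largest intermediate tensor has at most 2**20 entries
sliced = tree.slice(target_size=2**20)
print(sliced.sliced_inds, sliced.nslices)

# contracting the sliced tree sums over the slices automatically
result = sliced.contract(arrays)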
- slice_and_reconfigure(target_size, step_size=2, temperature=0.01, minimize=None, allow_outer=True, max_repeats=16, reslice=False, reconf_opts=None, progbar=False, inplace=False)[source]¶
Interleave slicing (removing indices into an exterior sum) with subtree reconfiguration to minimize the overhead induced by this slicing.
- Parameters:
target_size (int) – Slice the tree until the maximum intermediate size is this or smaller.
step_size (int, optional) – The minimum size reduction to try and achieve before switching to a round of subtree reconfiguration.
temperature (float, optional) – The temperature to supply to SliceFinder for searching for indices.
minimize ({'flops', 'size', ..., Objective}, optional) – The metric to minimize when slicing and reconfiguring subtrees.
max_repeats (int, optional) – The number of slicing attempts to perform per search.
progbar (bool, optional) – Whether to show live progress.
inplace (bool, optional) – Whether to perform the slicing and reconfiguration inplace.
reconf_opts (None or dict, optional) – Supplied to ContractionTree.subtree_reconfigure() or ContractionTree.subtree_reconfigure_forest(), depending on the 'forested' key value.
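A sketch of combined usage (the target size and options are illustrative):

tree = tree.slice_and_reconfigure(
    target_size=2**28,
    reconf_opts={"subtree_size": 8},
)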
- slice_and_reconfigure_forest(target_size, step_size=2, num_trees=8, restart_fraction=0.5, temperature=0.02, max_repeats=32, reslice=False, minimize=None, allow_outer=True, parallel='auto', progbar=False, inplace=False, reconf_opts=None)[source]¶
'Forested' version of ContractionTree.slice_and_reconfigure(). This maintains a 'forest' of trees with different slicing and subtree reconfiguration attempts, pruning the worst at each step and generating a new forest from the best.
- Parameters:
target_size (int) – Slice the tree until the maximum intermediate size is this or smaller.
step_size (int, optional) – The minimum size reduction to try and achieve before switching to a round of subtree reconfiguration.
num_restarts (int, optional) – The number of times to halt, prune and then restart the tree reconfigurations.
restart_fraction (float, optional) – The fraction of trees to keep at each stage and generate the next forest from.
temperature (float, optional) – The temperature at which to randomize the sliced index search.
max_repeats (int, optional) – The number of slicing attempts to perform per search.
parallel ('auto', False, True, int, or distributed.Client) – Whether to parallelize the search.
progbar (bool, optional) – Whether to show live progress.
inplace (bool, optional) – Whether to perform the slicing and reconfiguration inplace.
reconf_opts (None or dict, optional) – Supplied to ContractionTree.slice_and_reconfigure().
- Return type:
ContractionTree
- compressed_reconfigure(minimize=None, order_only=False, max_nodes='auto', max_time=None, local_score=None, exploration_power=0, best_score=None, progbar=False, inplace=False)[source]¶
Reconfigure this tree according to peak_size_compressed.
- Parameters:
chi (int) – The maximum bond dimension to consider.
order_only (bool, optional) – Whether to only consider the ordering of the current tree contractions, or all possible contractions, starting with the current.
max_nodes (int, optional) – Set the maximum number of contraction steps to consider.
max_time (float, optional) – Set the maximum time to spend on the search.
local_score (callable, optional) –
A function that assigns a score to a potential contraction, with a lower score giving more priority to explore that contraction earlier. It should have signature:
local_score(step, new_score, dsize, new_size)
where step is the number of steps so far, new_score is the score of the contraction so far, dsize is the change in memory by the current step, and new_size is the new memory size after contraction.
exploration_power (float, optional) – If not 0.0, the inverse power to which the step is raised in the default local score function. Higher values favor exploring more promising branches early on - at the cost of increased memory. Ignored if local_score is supplied.
best_score (float, optional) – Manually specify an upper bound for best score found so far.
progbar (bool, optional) – If True, display a progress bar.
inplace (bool, optional) – Whether to perform the reconfiguration inplace on this tree.
- Return type:
ContractionTree
- windowed_reconfigure(minimize=None, order_only=False, window_size=20, max_iterations=100, max_window_tries=1000, score_temperature=0.0, queue_temperature=1.0, scorer=None, queue_scorer=None, seed=None, inplace=False, progbar=False, **kwargs)[source]¶
- flat_tree(order=None)[source]¶
Create a nested tuple representation of the contraction tree like:
((0, (1, 2)), ((3, 4), ((5, (6, 7)), (8, 9))))
Such that the contraction will progress like:
((0, (1, 2)), ((3, 4), ((5, (6, 7)), (8, 9))))
((0, 12), (34, ((5, 67), 89)))
(012, (34, (567, 89)))
(012, (34, 56789))
(012, 3456789)
0123456789
Where each integer represents a leaf (i.e. single element node).
- get_leaves_ordered()[source]¶
Return the list of leaves as ordered by the contraction tree.
- Return type:
tuple[frozenset[str]]
- get_numpy_path(order=None)[source]¶
Generate a path compatible with the optimize kwarg of numpy.einsum.
- get_spans()[source]¶
Get all (which could mean none) potential embeddings of this contraction tree into a spanning tree of the original graph.
- Return type:
tuple[dict[frozenset[int], frozenset[int]]]
- compute_centralities(combine='mean')[source]¶
Compute a centrality for every node in this contraction tree.
- get_hypergraph(accel=False)[source]¶
Get a hypergraph representing the uncontracted network (i.e. the leaves).
- reset_contraction_indices()[source]¶
Reset all information regarding the explicit contraction indices ordering.
- sort_contraction_indices(priority='flops', make_output_contig=True, make_contracted_contig=True, reset=True)[source]¶
Set explicit orders for the contraction indices of this tree to optimize for one of two things: contiguity in contracted ('k') indices, or contiguity of left and right output ('m' and 'n') indices.
- Parameters:
priority ({'flops', 'size', 'root', 'leaves'}, optional) – Which order to process the intermediate nodes in. Later nodes re-sort previous nodes so are more likely to keep their ordering. E.g. for 'flops' the most costly contraction will be processed last and thus will be guaranteed to have its indices exactly sorted.
make_output_contig (bool, optional) – When processing a pairwise contraction, sort the parent contraction indices so that the order of indices is the order they appear from left to right in the two child (input) tensors.
make_contracted_contig (bool, optional) – When processing a pairwise contraction, sort the child (input) tensor indices so that all contracted indices appear contiguously.
reset (bool, optional) – Reset all indices to the default order before sorting.
- print_contractions(sort=None, show_brackets=True)[source]¶
Print each pairwise contraction, with colorized indices (if colorama is installed), and other information.
- get_contractor(order=None, prefer_einsum=False, strip_exponent=False, check_zero=False, implementation=None, autojit=False, progbar=False)[source]¶
Get a reusable function which performs the contraction corresponding to this tree, cached.
- Parameters:
tree (ContractionTree) – The contraction tree.
order (str or callable, optional) – Supplied to ContractionTree.traverse(), the order in which to perform the pairwise contractions given by the tree.
prefer_einsum (bool, optional) – Prefer to use einsum for pairwise contractions, even if tensordot can perform the contraction.
strip_exponent (bool, optional) – If True, the function will strip the exponent from the output array and return it separately.
check_zero (bool, optional) – If True, when strip_exponent=True, explicitly check for zero-valued intermediates that would otherwise produce nan, instead terminating early if encountered and returning (0.0, 0.0).
implementation (str or tuple[callable, callable], optional) –
What library to use to actually perform the contractions. Options are:
None: let cotengra choose.
"autoray": dispatch with autoray, using the tensordot and einsum implementation of the backend.
"cotengra": use the tensordot and einsum implementation of cotengra, which is based on batch matrix multiplication. This is faster for some backends like numpy, and also enables libraries which don't yet provide tensordot and einsum to be used.
"cuquantum": use the cuquantum library to perform the whole contraction (not just individual contractions).
tuple[callable, callable]: manually supply the tensordot and einsum implementations to use.
autojit (bool, optional) – If True, use autoray.autojit() to compile the contraction function.
progbar (bool, optional) – Whether to show progress through the contraction by default.
- Returns:
fn – The contraction function, with signature fn(*arrays).
- Return type:
callable
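A sketch of caching and reusing the compiled function (autojit here is optional and assumes autoray's jit support for your backend):

# build the reusable contraction function once
fn = tree.get_contractor(autojit=True)

# then call it on many different sets of arrays of the same shapes
result = fn(*arrays)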
- contract_core(arrays, order=None, prefer_einsum=False, strip_exponent=False, check_zero=False, backend=None, implementation=None, autojit='auto', progbar=False)[source]¶
Contract arrays with this tree. The order of the axes and output is assumed to be that of tree.inputs and tree.output, but with sliced indices removed. This function contracts the core tree and thus if indices have been sliced the arrays supplied need to be sliced as well.
- Parameters:
arrays (sequence of array) – The arrays to contract.
order (str or callable, optional) – Supplied to ContractionTree.traverse().
prefer_einsum (bool, optional) – Prefer to use einsum for pairwise contractions, even if tensordot can perform the contraction.
backend (str, optional) – What library to use for einsum and transpose, will be automatically inferred from the arrays if not given.
autojit ("auto" or bool, optional) – Whether to use autoray.autojit to jit compile the expression. If "auto", then let cotengra choose.
progbar (bool, optional) – Show progress through the contraction.
- slice_key(i, strides=None)[source]¶
Get the combination of sliced index values for overall slice i.
- Parameters:
i (int) – The overall slice index.
- Returns:
key – The value each sliced index takes for slice i.
- Return type:
dict[str, int]
- slice_arrays(arrays, i)[source]¶
Take arrays and slice the relevant inputs according to tree.sliced_inds and the dynary representation of i.
- gather_slices(slices, backend=None, progbar=False)[source]¶
Gather all the output contracted slices into a single full result. If none of the sliced indices appear in the output, then this is a simple sum - otherwise the slices need to be partially summed and partially stacked.
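Together with slice_arrays and contract_core this supports a manual, slice-by-slice driver loop, sketched here (arrays are the full, unsliced inputs):

# contract each independent slice separately, then recombine
chunks = []
for i in range(tree.nslices):
    sliced_arrays = tree.slice_arrays(arrays, i)
    chunks.append(tree.contract_core(sliced_arrays))

result = tree.gather_slices(chunks)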
- gen_output_chunks(arrays, with_key=False, progbar=False, **contract_opts)[source]¶
Generate each output chunk of the contraction - i.e. take care of summing internally sliced indices only first. This assumes that the sliced_inds are sorted by whether they appear in the output or not (the default order). Useful for performing some kind of reduction over the final tensor object like fn(x).sum() without constructing the entire thing.
- Parameters:
arrays (sequence of array) – The arrays to contract.
with_key (bool, optional) – Whether to yield the output index configuration key along with the chunk.
progbar (bool, optional) – Show progress through the contraction chunks.
- Yields:
chunk (array) – A chunk of the contracted result.
key (dict[str, int]) – The value each sliced output index takes for this chunk.
- contract(arrays, order=None, prefer_einsum=False, strip_exponent=False, check_zero=False, backend=None, implementation=None, autojit='auto', progbar=False)[source]¶
Contract arrays with this tree. This function takes unsliced arrays and handles the slicing, contractions and gathering. The order of the axes and output is assumed to match that of tree.inputs and tree.output.
- Parameters:
arrays (sequence of array) – The arrays to contract.
order (str or callable, optional) – Supplied to ContractionTree.traverse().
prefer_einsum (bool, optional) – Prefer to use einsum for pairwise contractions, even if tensordot can perform the contraction.
strip_exponent (bool, optional) – If True, eagerly strip the exponent (in log10) from intermediate tensors to control numerical problems from leaving the range of the datatype. This method then returns the scaled 'mantissa' output array and the exponent separately.
check_zero (bool, optional) – If True, when strip_exponent=True, explicitly check for zero-valued intermediates that would otherwise produce nan, instead terminating early if encountered and returning (0.0, 0.0).
backend (str, optional) – What library to use for tensordot, einsum and transpose, it will be automatically inferred from the input arrays if not given.
, it will be automatically inferred from the input arrays if not given.autojit (bool, optional) – Whether to use the ‘autojit’ feature of autoray to compile the contraction expression.
progbar (bool, optional) – Whether to show a progress bar.
- Returns:
output (array) – The contracted output, it will be scaled if strip_exponent==True.
exponent (float) – The exponent of the output in base 10, returned only if strip_exponent==True.
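A sketch of both calling conventions (arrays must match tree.inputs):

# plain contraction
out = tree.contract(arrays)

# keep the magnitude separate to avoid overflow/underflow:
# out == mantissa * 10**exponent
mantissa, exponent = tree.contract(arrays, strip_exponent=True)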
- contract_mpi(arrays, comm=None, root=None, **kwargs)[source]¶
Contract the slices of this tree and sum them in parallel - assuming we are already running under MPI.
- Parameters:
arrays (sequence of array) – The input (unsliced) arrays.
comm (None or mpi4py communicator) – Defaults to mpi4py.MPI.COMM_WORLD if not given.
root (None or int, optional) – If root=None, an Allreduce will be performed such that every process has the resulting tensor, else if an integer e.g. root=0, the result will be exclusively gathered to that process using Reduce, with every other process returning None.
kwargs – Supplied to contract_slice().
- benchmark(dtype, max_time=60, min_reps=3, max_reps=100, warmup=True, **contract_opts)[source]¶
Benchmark the contraction of this tree.
- Parameters:
dtype ({"float32", "float64", "complex64", "complex128"}) – The datatype to use.
max_time (float, optional) – The maximum time to spend benchmarking in seconds.
min_reps (int, optional) – The minimum number of repetitions to perform, regardless of time.
max_reps (int, optional) – The maximum number of repetitions to perform, regardless of time.
warmup (bool or int, optional) – Whether to perform a warmup run before the benchmark. If an int, the number of warmup runs to perform.
contract_opts – Supplied to contract_slice().
- Returns:
A dictionary of benchmarking results. The keys are:
- "time_per_slice" (float): The average time to contract a single slice.
- "est_time_total" (float): The estimated total time to contract all slices.
- "est_gigaflops" (float): The estimated gigaflops of the contraction.
- Return type:
dict
- class cotengra.ContractionTreeCompressed(inputs, output, size_dict, track_childless=False, track_flops=False, track_write=False, track_size=False, objective=None)[source]¶
Bases: ContractionTree
A contraction tree for compressed contractions. Currently the only difference is that this defaults to the 'surface' traversal ordering.
- classmethod from_path(inputs, output, size_dict, *, path=None, ssa_path=None, autocomplete='auto', check=False, **kwargs)[source]¶
Create a (completed) ContractionTreeCompressed from the usual inputs plus a standard contraction path or 'ssa_path' - you need to supply one. This also sets the default 'surface' traversal ordering to be the initial path.
- total_flops[source]¶
Sum the flops contribution from every node in the tree.
- Parameters:
dtype ({'float', 'complex', None}, optional) – Scale the answer depending on the assumed data type.
- peak_size[source]¶
Get the peak concurrent size of tensors needed - this depends on the traversal order, i.e. the exact contraction path, not just the contraction tree.
- abstract get_contractor(*_, **__)[source]¶
Get a reusable function which performs the contraction corresponding to this tree, cached.
- Parameters:
tree (ContractionTree) – The contraction tree.
order (str or callable, optional) – Supplied to ContractionTree.traverse(), the order in which to perform the pairwise contractions given by the tree.
prefer_einsum (bool, optional) – Prefer to use einsum for pairwise contractions, even if tensordot can perform the contraction.
strip_exponent (bool, optional) – If True, the function will strip the exponent from the output array and return it separately.
check_zero (bool, optional) – If True, when strip_exponent=True, explicitly check for zero-valued intermediates that would otherwise produce nan, instead terminating early if encountered and returning (0.0, 0.0).
implementation (str or tuple[callable, callable], optional) –
What library to use to actually perform the contractions. Options are:
None: let cotengra choose.
"autoray": dispatch with autoray, using the tensordot and einsum implementation of the backend.
"cotengra": use the tensordot and einsum implementation of cotengra, which is based on batch matrix multiplication. This is faster for some backends like numpy, and also enables libraries which don't yet provide tensordot and einsum to be used.
"cuquantum": use the cuquantum library to perform the whole contraction (not just individual contractions).
tuple[callable, callable]: manually supply the tensordot and einsum implementations to use.
autojit (bool, optional) – If True, use autoray.autojit() to compile the contraction function.
progbar (bool, optional) – Whether to show progress through the contraction by default.
- Returns:
fn – The contraction function, with signature fn(*arrays).
- Return type:
callable
- class cotengra.ContractionTreeMulti(inputs, output, size_dict, sliced_inds, objective, track_cache=False)[source]¶
Bases: cotengra.core.ContractionTree
Binary tree representing a tensor network contraction.
- Parameters:
inputs (sequence of str) – The indices of each input tensor.
output (str) – The output indices.
size_dict (dict[str, int]) – The size of each index.
track_childless (bool, optional) – Whether to dynamically keep track of which nodes are childless. Useful if you are 'divisively' building the tree.
track_flops (bool, optional) – Whether to dynamically keep track of the total number of flops. If False, you can still compute this once the tree is complete.
track_write (bool, optional) – Whether to dynamically keep track of the total number of elements written. If False, you can still compute this once the tree is complete.
track_size (bool, optional) – Whether to dynamically keep track of the largest tensor so far. If False, you can still compute this once the tree is complete.
objective (str or Objective, optional) – A default objective function to use for further optimization and scoring, for example reconfiguring or computing the combo cost. If not supplied the default is to create a flops objective when needed.
- children¶
Mapping of each node to two children.
- Type:
dict[node, tuple[node]]
- info¶
Information about the tree nodes. The key is the set of inputs (a set of input indices) the node contains, or in other words, the subgraph of the node. The value is a dictionary to cache information about effective 'leg' indices, size, flops of formation etc.
- Type:
dict[node, dict]
- sliced_inds¶
- _track_cache = False¶
- _remove_node(node)[source]¶
Remove node from this tree and update the flops and maximum size if tracking them respectively, as well as input pre-processing.
- get_node_is_bright(node)[source]¶
Get whether a node is 'bright', i.e. contains a different set of variable indices to either of its children. If a node is not bright then its children never have to be stored in the cache.
- get_node_mult(node)[source]¶
Get the estimated ‘multiplicity’ of a node, i.e. the number of times it will have to be recomputed for different index configurations.
- get_node_cache_mult(node, sliced_ind_ordering)[source]¶
Get the estimated ‘cache multiplicity’ of a node, i.e. the total number of versions with different index configurations that must be stored simultaneously in the cache.
- get_flops(node)[source]¶
Get the estimated total cost of computing a node for all index configurations.
- peak_size(log=None)[source]¶
Get the peak concurrent size of tensors needed - this depends on the traversal order, i.e. the exact contraction path, not just the contraction tree.
- class cotengra.HyperGraph(inputs, output=None, size_dict=None)[source]¶
Simple hypergraph builder and writer.
- Parameters:
inputs (sequence of list[str] or dict[int, list[str]]) – The nodes. If given as a dict, the keys will be taken as the node enumeration rather than range(len(inputs)).
output (str, optional) – Output indices.
size_dict (dict[str, int], optional) – Size of each index.
- nodes¶
Mapping of node to the list of edges incident to it.
- Type:
dict[int, list[str]]
- edges¶
Mapping of each hyper edge to the list of nodes it is incident to.
- Type:
dict[str, list[int]]
- num_nodes¶
The number of nodes.
- Type:
int
- num_edges¶
The number of hyper-edges.
- Type:
int
- __slots__ = ('inputs', 'output', 'size_dict', 'nodes', 'edges', 'node_counter')¶
- inputs¶
- output = []¶
- size_dict¶
- edges¶
- node_counter¶
- property num_nodes¶
- property num_edges¶
- contract_pair_cost(i, j)[source]¶
Get the cost of contracting nodes i and j - the product of the dimensions of the indices involved.
- neighbor_edges(i)[source]¶
Get the edges incident to all neighbors of node i (including its own edges).
- add_node(inds, node=None)[source]¶
Add a node with inds, and optional identifier node. The identifier will be generated and returned if not given.
- compress(chi, edges=None)[source]¶
'Compress' multiedges, combining their size up to a maximum of chi.
- candidate_contraction_size(i, j, chi=None)[source]¶
Get the size of the node created if i and j were contracted, optionally including the effect of first compressing bonds to size chi.
- simple_distance(region, p=2)[source]¶
Compute a simple distance metric from nodes in region to all others. Unlike graph distance, relative connectedness is taken into account.
- simple_closeness(p=0.75, mu=0.5)[source]¶
Compute a rough hypergraph ‘closeness’.
- Parameters:
p (float, optional) – Once any node has had H.num_nodes**p visitors, terminate. Set greater than 1.0 for no limit (slower).
mu (float, optional) – Let the visitor score decay with this power. The higher this is, the more local connectivity is favored.
- Returns:
scores – The simple hypergraph closenesses - higher being more central.
- Return type:
dict[int, float]
- simple_centrality(r=None, smoothness=2, **closeness_opts)[source]¶
A simple algorithm for large hypergraph centrality. First we find a rough closeness centrality, then relax / smooth this by nodes iteratively radiating their centrality to their neighbors.
- Parameters:
r (None or int, optional) – Number of iterations. Defaults to max(10, int(self.num_nodes**0.5)).
smoothness (float, optional) – The smoothness. In conjunction with a high value of r this will create a smooth gradient from one end of the hypergraph to the other.
closeness_opts – Supplied to HyperGraph.simple_closeness as the starting point.
- Return type:
dict[int, float]
- compute_loops(start=None, max_loop_length=None)[source]¶
Generate all loops up to a certain length in this hypergraph.
- Parameters:
start (sequence of int, optional) – Only generate loops including these nodes, defaults to all.
max_loop_length (None or int, optional) – The maximum loop length to search for. If None, then this is set automatically by the length of the first loop found.
- Yields:
loop (tuple[int]) – A set of nodes that form a loop.
- resistance_centrality(rescale=True)[source]¶
Compute the centrality in terms of the total resistance distance to all other nodes.
- cotengra.get_hypergraph(inputs, output=None, size_dict=None, accel=False)[source]¶
Single entry-point for creating a, possibly accelerated, HyperGraph.
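A sketch of building and inspecting a small triangle of tensors (indices and sizes illustrative):

import cotengra as ctg

hg = ctg.get_hypergraph(
    [["a", "b"], ["b", "c"], ["c", "a"]],
    output="",
    size_dict={"a": 2, "b": 2, "c": 2},
)
print(hg.num_nodes, hg.num_edges)  # 3 nodes, 3 edges
print(hg.simple_centrality())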
- class cotengra.HyperCompressedOptimizer(chi=None, methods=('greedy-compressed', 'greedy-span', 'kahypar-agglom'), minimize='peak-compressed', **kwargs)[source]¶
Bases: HyperOptimizer
A compressed contraction path optimizer that samples a series of ordered contraction trees while optimizing the hyper parameters used to generate them.
- Parameters:
chi (None or int, optional) – The maximum bond dimension to compress to. If None then use the square of the largest existing dimension. If minimize is specified as a full scoring function, this is ignored.
methods (None or sequence[str] or str, optional) – Which method(s) to use from list_hyper_functions().
minimize (str, Objective or callable, optional) – How to score each trial, used to train the optimizer and rank the results. If a custom callable, it should take a trial dict as its argument and return a single float.
max_repeats (int, optional) – The maximum number of trial contraction trees to generate. Default: 128.
max_time (None or float, optional) – The maximum amount of time to run for. Use None for no limit. You can also set an estimated execution 'rate' here like 'rate:1e9' that will terminate the search when the estimated FLOPs of the best contraction found divided by the rate is greater than the time spent searching, allowing quick termination on easy contractions.
parallel ('auto', False, True, int, or distributed.Client) – Whether to parallelize the search.
slicing_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing with the given options, and then update the flops and size of the trial with the sliced versions.
slicing_reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing interleaved with subtree reconfiguation with the given options, and then update the flops and size of the trial with the sliced and reconfigured versions.
reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try subtree reconfiguation with the given options, and then update the flops and size of the trial with the reconfigured versions.
optlib ({'baytune', 'nevergrad', 'chocolate', 'skopt'}, optional) – Which optimizer to sample and train with.
space (dict, optional) – The hyper space to search, see get_hyper_space for the default.
score_compression (float, optional) – Raise scores to this power in order to compress or accentuate the differences. The lower this is, the more the selector will sample from various optimizers rather than quickly specializing.
max_training_steps (int, optional) – The maximum number of trials to train the optimizer with. Setting this can be helpful when the optimizer itself becomes costly to train (e.g. for Gaussian Processes).
progbar (bool, optional) – Show live progress of the best contraction found so far.
optlib_opts – Supplied to the hyper-optimizer library initialization.
- compressed = True¶
- multicontraction = False¶
- class cotengra.HyperMultiOptimizer(methods=None, minimize='flops', max_repeats=128, max_time=None, parallel='auto', simulated_annealing_opts=None, slicing_opts=None, slicing_reconf_opts=None, reconf_opts=None, optlib=None, space=None, score_compression=0.75, on_trial_error='warn', max_training_steps=None, progbar=False, **optlib_opts)[source]¶
Bases: HyperOptimizer
A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them.
- Parameters:
methods (None or sequence[str] or str, optional) – Which method(s) to use from list_hyper_functions().
minimize (str, Objective or callable, optional) – How to score each trial, used to train the optimizer and rank the results. If a custom callable, it should take a trial dict as its argument and return a single float.
max_repeats (int, optional) – The maximum number of trial contraction trees to generate. Default: 128.
max_time (None or float, optional) – The maximum amount of time to run for. Use None for no limit. You can also set an estimated execution 'rate' here like 'rate:1e9' that will terminate the search when the estimated FLOPs of the best contraction found divided by the rate is greater than the time spent searching, allowing quick termination on easy contractions.
parallel ('auto', False, True, int, or distributed.Client) – Whether to parallelize the search.
slicing_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing with the given options, and then update the flops and size of the trial with the sliced versions.
slicing_reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing interleaved with subtree reconfiguation with the given options, and then update the flops and size of the trial with the sliced and reconfigured versions.
reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try subtree reconfiguation with the given options, and then update the flops and size of the trial with the reconfigured versions.
optlib ({'optuna', 'cmaes', 'nevergrad', 'skopt', ...}, optional) – Which optimizer to sample and train with.
space (dict, optional) – The hyper space to search, see get_hyper_space for the default.
score_compression (float, optional) – Raise scores to this power in order to compress or accentuate the differences. The lower this is, the more the selector will sample from various optimizers rather than quickly specializing.
on_trial_error ({'warn', 'raise', 'ignore'}, optional) – What to do if a trial fails. If 'warn' (default), a warning will be printed and the trial will be given a score of inf. If 'raise' the error will be raised. If 'ignore' the trial will be given a score of inf silently.
max_training_steps (int, optional) – The maximum number of trials to train the optimizer with. Setting this can be helpful when the optimizer itself becomes costly to train (e.g. for Gaussian Processes).
progbar (bool, optional) – Show live progress of the best contraction found so far.
optlib_opts – Supplied to the hyper-optimizer library initialization.
- compressed = False¶
- multicontraction = True¶
- class cotengra.HyperOptimizer(methods=None, minimize='flops', max_repeats=128, max_time=None, parallel='auto', simulated_annealing_opts=None, slicing_opts=None, slicing_reconf_opts=None, reconf_opts=None, optlib=None, space=None, score_compression=0.75, on_trial_error='warn', max_training_steps=None, progbar=False, **optlib_opts)[source]¶
Bases: cotengra.oe.PathOptimizer
A path optimizer that samples a series of contraction trees while optimizing the hyper parameters used to generate them.
- Parameters:
methods (None or sequence[str] or str, optional) – Which method(s) to use from list_hyper_functions().
minimize (str, Objective or callable, optional) – How to score each trial, used to train the optimizer and rank the results. If a custom callable, it should take a trial dict as its argument and return a single float.
max_repeats (int, optional) – The maximum number of trial contraction trees to generate. Default: 128.
max_time (None or float, optional) – The maximum amount of time to run for. Use None for no limit. You can also set an estimated execution 'rate' here like 'rate:1e9' that will terminate the search when the estimated FLOPs of the best contraction found divided by the rate is greater than the time spent searching, allowing quick termination on easy contractions.
parallel ('auto', False, True, int, or distributed.Client) – Whether to parallelize the search.
slicing_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing with the given options, and then update the flops and size of the trial with the sliced versions.
slicing_reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try slicing interleaved with subtree reconfiguation with the given options, and then update the flops and size of the trial with the sliced and reconfigured versions.
reconf_opts (dict, optional) – If supplied, once a trial contraction path is found, try subtree reconfiguation with the given options, and then update the flops and size of the trial with the reconfigured versions.
optlib ({'optuna', 'cmaes', 'nevergrad', 'skopt', ...}, optional) – Which optimizer to sample and train with.
space (dict, optional) – The hyper space to search, see get_hyper_space for the default.
score_compression (float, optional) – Raise scores to this power in order to compress or accentuate the differences. The lower this is, the more the selector will sample from various optimizers rather than quickly specializing.
on_trial_error ({'warn', 'raise', 'ignore'}, optional) – What to do if a trial fails. If 'warn' (default), a warning will be printed and the trial will be given a score of inf. If 'raise' the error will be raised. If 'ignore' the trial will be given a score of inf silently.
max_training_steps (int, optional) – The maximum number of trials to train the optimizer with. Setting this can be helpful when the optimizer itself becomes costly to train (e.g. for Gaussian Processes).
progbar (bool, optional) – Show live progress of the best contraction found so far.
optlib_opts – Supplied to the hyper-optimizer library initialization.
- compressed = False¶
- multicontraction = False¶
- max_repeats = 128¶
- _repeats_start = 0¶
- max_time = None¶
- property parallel¶
- method_choices = []¶
- param_choices = []¶
- scores = []¶
- times = []¶
- costs_flops = []¶
- costs_write = []¶
- costs_size = []¶
- property minimize¶
- score_compression = 0.75¶
- on_trial_error = 'warn'¶
- best_score¶
- max_training_steps = None¶
- best¶
- trials_since_best = 0¶
- simulated_annealing_opts = None¶
- slicing_opts = None¶
- reconf_opts = None¶
- slicing_reconf_opts = None¶
- progbar = False¶
- _optimizer¶
- property tree¶
- property path¶
- search(inputs, output, size_dict)[source]¶
Run this optimizer and return the ContractionTree for the best path it finds.
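A sketch of driving the optimizer directly (sizes are illustrative; the same instance can also be passed as an optimize= argument via __call__ below):

import cotengra as ctg

inputs = [("a", "b"), ("b", "c"), ("c", "d")]
output = ("a", "d")
size_dict = {"a": 2, "b": 2, "c": 2, "d": 2}

opt = ctg.HyperOptimizer(max_repeats=32)
tree = opt.search(inputs, output, size_dict)
print(tree.total_flops())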
- __call__(inputs, output, size_dict, memory_limit=None)[source]¶
opt_einsum interface, returns direct path.
- class cotengra.ReusableHyperCompressedOptimizer(chi=None, methods=('greedy-compressed', 'greedy-span', 'kahypar-agglom'), minimize='peak-compressed', **kwargs)[source]¶
Bases: ReusableHyperOptimizer
Like HyperCompressedOptimizer but it will re-instantiate the optimizer whenever a new contraction is detected, and also cache the paths found.
- Parameters:
chi (None or int, optional) – The maximum bond dimension to compress to. If None then use the square of the largest existing dimension. If minimize is specified as a full scoring function, this is ignored.
directory (None, True, or str, optional) – If specified, use this directory as a persistent cache. If True, auto-generate a directory in the current working directory based on the options which are most likely to affect the path (see ReusableHyperOptimizer.get_path_relevant_opts).
overwrite (bool, optional) – If True, the optimizer will always run, overwriting old results in the cache. This can be used to update paths without deleting the whole cache.
set_surface_order (bool, optional) – If True, when reloading a path to turn into a ContractionTree, the 'surface order' of the path (used for compressed paths) will be set manually to the order of the path on disk.
hash_method ({'a', 'b', ...}, optional) – The method used to hash the contraction tree. The default, 'a', is faster to hash but doesn't recognize when indices are permuted.
cache_only (bool, optional) – If True, the optimizer will only use the cache, and will raise KeyError if a contraction is not found.
opt_kwargs – Supplied to HyperCompressedOptimizer.
- set_surface_order = True¶
- class cotengra.ReusableHyperOptimizer(*, directory=None, overwrite=False, hash_method='a', cache_only=False, **opt_kwargs)[source]¶
Bases: ReusableOptimizer
Like HyperOptimizer but it will re-instantiate the optimizer whenever a new contraction is detected, and also cache the paths (and sliced indices) found.
- Parameters:
directory (None, True, or str, optional) – If specified, use this directory as a persistent cache. If True, auto-generate a directory in the current working directory based on the options which are most likely to affect the path (see ReusableHyperOptimizer.get_path_relevant_opts).
overwrite (bool, optional) – If True, the optimizer will always run, overwriting old results in the cache. This can be used to update paths without deleting the whole cache.
set_surface_order (bool, optional) – If True, when reloading a path to turn into a ContractionTree, the 'surface order' of the path (used for compressed paths) will be set manually to the order of the path on disk.
hash_method ({'a', 'b', ...}, optional) – The method used to hash the contraction tree. The default, 'a', is faster to hash but doesn't recognize when indices are permuted.
cache_only (bool, optional) – If True, the optimizer will only use the cache, and will raise KeyError if a contraction is not found.
opt_kwargs – Supplied to HyperOptimizer.
- set_surface_order = False¶
- get_path_relevant_opts()[source]¶
Get a frozenset of the options that are most likely to affect the path. These are the options that we use when the directory name is not manually specified.
- property minimize¶
- update_from_tree(tree, overwrite='improved')[source]¶
Explicitly add the contraction that tree represents into the cache, for example if you have manually improved it via subtree reconfiguration. If overwrite=False and the contraction is already present then do nothing. If overwrite='improved' then only overwrite if the new path is better. If overwrite=True then always overwrite.
- Parameters:
tree (ContractionTree) – The tree to add to the cache.
overwrite (bool or "improved", optional) – If True always overwrite, if False only overwrite if the contraction is missing, if 'improved' only overwrite if the new path is better (the default). Note that the comparison of scores is based on the default objective of the tree.
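Example
A minimal sketch of cached reuse, assuming the same search() interface as HyperOptimizer:
import cotengra as ctg
inputs = [("a", "b"), ("b", "c")]
output = ("a", "c")
size_dict = {"a": 4, "b": 4, "c": 4}
opt = ctg.ReusableHyperOptimizer(directory=True, max_repeats=16)
tree = opt.search(inputs, output, size_dict)  # first call: runs the search and caches the path
tree = opt.search(inputs, output, size_dict)  # second call: reconstructed from the cache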
- cotengra.hash_contraction(inputs, output, size_dict, method='a')[source]¶
Compute a hash for a particular contraction geometry.
- cotengra.list_hyper_functions()[source]¶
Return a list of currently registered hyper contraction finders.
- cotengra.array_contract(arrays, inputs, output=None, optimize='auto', cache_expression=True, backend=None, **kwargs)[source]¶
Perform the tensor contraction specified by inputs, output and size_dict, using the strategy given by optimize. By default the path finding and expression building are cached, so that if a matching contraction is performed multiple times the overhead is negated.
- Parameters:
arrays (Sequence[array_like]) – The arrays to contract.
inputs (Sequence[Sequence[Hashable]]) – The input terms.
output (Sequence[Hashable]) – The output term.
optimize (str, path_like, PathOptimizer, or ContractionTree) – The optimization strategy to use. This can be:
- A string preset, e.g. 'auto', 'greedy', 'optimal'.
- A PathOptimizer instance.
- An explicit path, e.g. [(0, 1), (2, 3), ...].
- An explicit ContractionTree instance.
If the optimizer provides sliced indices they will be used.
cache_expression (bool, optional) – If True, cache the expression used to contract the arrays. This negates the overhead of pathfinding and building the expression when a contraction is performed multiple times. Only for hashable optimize objects.
backend (str, optional) – If given, the explicit backend to use for the contraction; by default the backend is dispatched automatically.
kwargs – Passed to array_contract_expression().
- Return type: array_like
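Example
A minimal sketch computing the trace of a matrix product as a single contraction (numpy is used only for illustration):
import numpy as np
import cotengra as ctg
x = np.random.rand(2, 3)
y = np.random.rand(3, 4)
z = np.random.rand(4, 2)
# tr(x @ y @ z): index 'a' is shared by the first and last terms, output is empty
out = ctg.array_contract(
    arrays=(x, y, z),
    inputs=[("a", "b"), ("b", "c"), ("c", "a")],
    output=(),
    optimize="greedy",
)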
- cotengra.array_contract_expression(inputs, output=None, size_dict=None, shapes=None, optimize='auto', constants=None, canonicalize=True, cache=True, **kwargs)[source]¶
Get a callable 'expression' that will contract tensors with indices and shapes described by inputs and size_dict to output. The optimize kwarg can be a path, optimizer or also a contraction tree. In the latter case sliced indices, for example, will be used if present. The same is true if optimize is an optimizer that can directly produce ContractionTree instances (i.e. has a .search() method).
- Parameters:
inputs (Sequence[Sequence[Hashable]]) – The input terms.
output (Sequence[Hashable]) – The output term.
size_dict (dict[Hashable, int]) – The size of each index.
optimize (str, path_like, PathOptimizer, or ContractionTree) – The optimization strategy to use. This can be:
- A string preset, e.g. 'auto', 'greedy', 'optimal'.
- A PathOptimizer instance.
- An explicit path, e.g. [(0, 1), (2, 3), ...].
- An explicit ContractionTree instance.
If the optimizer provides sliced indices they will be used.
constants (dict[int, array_like], optional) – A mapping of constant input positions to constant arrays. If given, the final expression will take only the remaining non-constant tensors as inputs. Note this is a different format to the constants kwarg of einsum_expression() since it also provides the constant arrays.
implementation (str or tuple[callable, callable], optional) – What library to use to actually perform the contractions. Options are:
- None: let cotengra choose.
- "autoray": dispatch with autoray, using the tensordot and einsum implementations of the backend.
- "cotengra": use the tensordot and einsum implementations of cotengra, which are based on batch matrix multiplication. This is faster for some backends like numpy, and also enables libraries which don't yet provide tensordot and einsum to be used.
- "cuquantum": use the cuquantum library to perform the whole contraction (not just individual contractions).
- tuple[callable, callable]: manually supply the tensordot and einsum implementations to use.
autojit (bool, optional) – If True, use autoray.autojit() to compile the contraction function.
via (tuple[callable, callable], optional) – If given, the first function will be applied to the input arrays and the second to the output array. For example, moving the tensors from CPU to GPU and back.
sort_contraction_indices (bool, optional) – If True, call tree.sort_contraction_indices() before constructing the contraction function.
cache (bool, optional) – If True, cache the contraction expression. This negates the overhead of pathfinding and building the expression when a contraction is performed multiple times. Only for hashable optimize objects.
- Returns:
expr – A callable, with signature expr(*arrays), that will contract arrays with shapes shapes.
- Return type: callable
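Example
A minimal sketch of building an expression once and reusing it:
import numpy as np
import cotengra as ctg
expr = ctg.array_contract_expression(
    inputs=[("a", "b"), ("b", "c")],
    output=("a", "c"),
    size_dict={"a": 2, "b": 3, "c": 4},
)
x = np.random.rand(2, 3)
y = np.random.rand(3, 4)
out = expr(x, y)  # matrix multiplication; expr can be reused on any arrays with matching shapes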
- cotengra.array_contract_path(inputs, output=None, size_dict=None, shapes=None, optimize='auto', canonicalize=True, cache=True)[source]¶
Find only the contraction path for the specific contraction, with fast dispatch of optimize, which can be a preset, path, tree, cotengra optimizer or opt_einsum optimizer. The raw path is a more compact representation of the core tree structure but contains less information on its own; for example, sliced indices are not included.
- Parameters:
inputs (Sequence[Sequence[Hashable]]) – The input terms.
output (Sequence[Hashable], optional) – The output term.
size_dict (dict[Hashable, int], optional) – The size of each index; if given, shapes is ignored.
shapes (Sequence[tuple[int]], optional) – The shape of each input array. Needed if size_dict is not supplied.
optimize (str, path_like, PathOptimizer, or ContractionTree) – The optimization strategy to use. This can be:
- A string preset, e.g. 'auto', 'greedy', 'optimal'.
- A PathOptimizer instance.
- An explicit path, e.g. [(0, 1), (2, 3), ...].
- An explicit ContractionTree instance.
canonicalize (bool, optional) – If True, canonicalize the inputs and output so that the indices are relabelled 'a', 'b', 'c', ..., etc. in the order they appear.
cache (bool, optional) – If True, cache the path for the contraction, so that if the same pathfinding is performed multiple times the overhead is negated. Only for hashable optimize objects.
- Returns:
path – The contraction path, whose interpretation is as follows: the input tensors are assumed to be stored in a list, i.e. indexed by range(N). Each contraction in the path is a set of indices; the tensors at these locations should be popped from the list and the result of the contraction appended.
- Return type: tuple[tuple[int]]
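Example
A minimal sketch; the printed path is illustrative only:
import cotengra as ctg
path = ctg.array_contract_path(
    inputs=[("a", "b"), ("b", "c"), ("c", "d")],
    output=("a", "d"),
    shapes=[(2, 3), (3, 4), (4, 5)],
    optimize="greedy",
)
print(path)  # e.g. ((0, 1), (0, 1)): pop those positions and append the result each time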
- cotengra.array_contract_tree(inputs, output=None, size_dict=None, shapes=None, optimize='auto', canonicalize=True, sort_contraction_indices=False)[source]¶
Get the ContractionTree for the tensor contraction specified by inputs, output and size_dict, with the optimization strategy given by optimize. The tree can be used to inspect and also perform the contraction.
- Parameters:
inputs (Sequence[Sequence[Hashable]]) – The input terms.
output (Sequence[Hashable], optional) – The output term.
size_dict (dict[Hashable, int], optional) – The size of each index; if given, shapes is ignored.
shapes (Sequence[tuple[int]], optional) – The shape of each input array. Needed if size_dict is not supplied.
optimize (str, path_like, PathOptimizer, or ContractionTree) – The optimization strategy to use. This can be:
- A string preset, e.g. 'auto', 'greedy', 'optimal'.
- A PathOptimizer instance.
- An explicit path, e.g. [(0, 1), (2, 3), ...].
- An explicit ContractionTree instance.
canonicalize (bool, optional) – If True, canonicalize the inputs and output so that the indices are relabelled 'a', 'b', 'c', ..., etc. in the order they appear.
sort_contraction_indices (bool, optional) – If True, call tree.sort_contraction_indices().
- Return type: ContractionTree
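Example
A minimal sketch of inspecting a tree; contraction_cost() and contraction_width() are assumed ContractionTree methods:
import cotengra as ctg
tree = ctg.array_contract_tree(
    inputs=[("a", "b"), ("b", "c"), ("c", "d")],
    output=("a", "d"),
    size_dict={"a": 2, "b": 3, "c": 4, "d": 5},
    optimize="greedy",
)
print(tree.contraction_cost(), tree.contraction_width())  # assumed methods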
- cotengra.einsum(*args, optimize='auto', cache_expression=True, backend=None, **kwargs)[source]¶
Perform an einsum contraction, using cotengra, with the strategy given by optimize. By default the path finding and expression building are cached, so that if a matching contraction is performed multiple times the overhead is negated.
- Parameters:
eq (str) – The equation to use for contraction, for example 'ab,bc->ac'.
arrays (Sequence[array_like]) – The arrays to contract.
optimize (str, path_like, PathOptimizer, or ContractionTree) – The optimization strategy to use. This can be:
- A string preset, e.g. 'auto', 'greedy', 'optimal'.
- A PathOptimizer instance.
- An explicit path, e.g. [(0, 1), (2, 3), ...].
- An explicit ContractionTree instance.
If the optimizer provides sliced indices they will be used.
cache_expression (bool, optional) – If True, cache the expression used to contract the arrays. This negates the overhead of pathfinding and building the expression when a contraction is performed multiple times. Only for hashable optimize objects.
backend (str, optional) – If given, the explicit backend to use for the contraction; by default the backend is dispatched automatically.
kwargs – Passed to array_contract_expression().
- Return type: array_like
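Example
A minimal drop-in sketch, mirroring numpy.einsum usage:
import numpy as np
import cotengra as ctg
x = np.random.rand(8, 8)
y = np.random.rand(8, 8)
z = ctg.einsum("ab,bc->ac", x, y, optimize="auto")
np.testing.assert_allclose(z, x @ y)  # same result as a plain matrix multiplication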
- cotengra.einsum_expression(*args, optimize='auto', constants=None, cache=True, **kwargs)[source]¶
Get a callable 'expression' that will contract tensors with shapes shapes according to the equation eq. The optimize kwarg can be a path, optimizer or also a contraction tree. In the latter case sliced indices, for example, will be used if present. The same is true if optimize is an optimizer that can directly produce ContractionTree instances (i.e. has a .search() method).
- Parameters:
eq (str) – The equation to use for contraction, for example 'ab,bc->ac'. The output will be automatically computed if not supplied, but ellipses ('...') are not supported.
shapes (Sequence[tuple[int]]) – The shapes of the tensors to contract, or the constant tensor itself if marked as constant in constants.
optimize (str, path_like, PathOptimizer, or ContractionTree) – The optimization strategy to use. This can be:
- A string preset, e.g. 'auto', 'greedy', 'optimal'.
- A PathOptimizer instance.
- An explicit path, e.g. [(0, 1), (2, 3), ...].
- An explicit ContractionTree instance.
If the optimizer provides sliced indices they will be used.
constants (Sequence of int, optional) – The indices of tensors to treat as constant; the final expression will take the remaining non-constant tensors as inputs. Note this is a different format to the constants kwarg of array_contract_expression() since the actual constant arrays are inserted into shapes.
implementation (str or tuple[callable, callable], optional) – What library to use to actually perform the contractions. Options are:
- None: let cotengra choose.
- "autoray": dispatch with autoray, using the tensordot and einsum implementations of the backend.
- "cotengra": use the tensordot and einsum implementations of cotengra, which are based on batch matrix multiplication. This is faster for some backends like numpy, and also enables libraries which don't yet provide tensordot and einsum to be used.
- "cuquantum": use the cuquantum library to perform the whole contraction (not just individual contractions).
- tuple[callable, callable]: manually supply the tensordot and einsum implementations to use.
autojit (bool, optional) – If True, use autoray.autojit() to compile the contraction function.
via (tuple[callable, callable], optional) – If given, the first function will be applied to the input arrays and the second to the output array. For example, moving the tensors from CPU to GPU and back.
sort_contraction_indices (bool, optional) – If True, call tree.sort_contraction_indices() before constructing the contraction function.
cache (bool, optional) – If True, cache the contraction expression. This negates the overhead of pathfinding and building the expression when a contraction is performed multiple times. Only for hashable optimize objects.
- Returns:
expr – A callable, with signature expr(*arrays), that will contract arrays with shapes matching shapes.
- Return type: callable
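Example
A minimal sketch using a constant tensor: the constant array is passed in place of its shape, and the returned expression then takes only the remaining input:
import numpy as np
import cotengra as ctg
y = np.random.rand(3, 4)  # constant: supplied now, not at call time
expr = ctg.einsum_expression("ab,bc->ac", (2, 3), y, constants=[1])
x = np.random.rand(2, 3)
out = expr(x)  # only the non-constant tensor is passed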
- cotengra.einsum_tree(*args, optimize='auto', canonicalize=False, sort_contraction_indices=False)[source]¶
Get the ContractionTree for the einsum equation eq and optimization strategy optimize. The tree can be used to inspect and also perform the contraction.
- Parameters:
eq (str) – The equation to use for contraction, for example 'ab,bc->ac'.
shapes (Sequence[tuple[int]]) – The shape of each input array.
optimize (str, path_like, PathOptimizer, or ContractionTree) – The optimization strategy to use. This can be:
- A string preset, e.g. 'auto', 'greedy', 'optimal'.
- A PathOptimizer instance.
- An explicit path, e.g. [(0, 1), (2, 3), ...].
- An explicit ContractionTree instance.
canonicalize (bool, optional) – If True, canonicalize the inputs and output so that the indices are relabelled 'a', 'b', 'c', ..., etc. in the order they appear.
sort_contraction_indices (bool, optional) – If True, call tree.sort_contraction_indices().
- Return type: ContractionTree
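Example
A minimal sketch; tree.contract() is assumed from the ContractionTree API, not documented above:
import numpy as np
import cotengra as ctg
tree = ctg.einsum_tree("ab,bc->ac", (2, 3), (3, 4), optimize="greedy")
arrays = [np.random.rand(2, 3), np.random.rand(3, 4)]
out = tree.contract(arrays)  # assumed method: perform the contraction via the tree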
- cotengra.ncon(arrays, indices, **kwargs)[source]¶
Perform a contraction specified by ncon-style indices, using cotengra. This is very similar to array_contract, but the indices should be integers, with negative integers specifying outputs. The output order is determined by the order of the negative integers, like [-1, -2, -3, ...].
See the paper https://arxiv.org/abs/1402.0939 and the python implementation https://github.com/mhauru/ncon.
The contraction order is found as with other cotengra routines, and does not default to contracting in the order given by the indices.
- Parameters:
arrays (Sequence[array_like]) – The arrays to contract.
indices (Sequence[Sequence[int]]) – The indices to contract, with negative integers specifying outputs.
kwargs (dict) – Supplied to array_contract.
- Return type:
array_like
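Example
A minimal sketch of a matrix multiplication in ncon form:
import numpy as np
import cotengra as ctg
x = np.random.rand(2, 3)
y = np.random.rand(3, 4)
# positive integers are contracted over, negative integers label the output order
out = ctg.ncon([x, y], [[-1, 1], [1, -2]])
np.testing.assert_allclose(out, x @ y)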
- cotengra.register_preset(preset, optimizer, optimizer_tree=None, register_opt_einsum='auto', compressed=False)[source]¶
Register a preset optimizer.
- Parameters:
preset (str) – The name of the preset.
optimizer (callable) – The optimizer function that returns a path.
optimizer_tree (callable, optional) – The optimizer function that returns a tree.
register_opt_einsum (bool or str, optional) – If True or 'auto', register the preset with opt_einsum.
compressed (bool, optional) – If True, the preset represents a compressed contraction optimizer.
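Example
A minimal sketch of registering a toy preset. It assumes preset callables receive (inputs, output, size_dict) and return a path; the strategy shown is deliberately trivial:
import cotengra as ctg
def optimize_first_two(inputs, output, size_dict, **kwargs):
    # toy strategy: always contract the first two remaining terms
    return [(0, 1)] * (len(inputs) - 1)
ctg.register_preset("first-two", optimize_first_two)
# now usable wherever a preset string is accepted, e.g.:
# ctg.array_contract(arrays, inputs, output, optimize="first-two")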
- class cotengra.GreedyOptimizer(costmod=1.0, temperature=0.0, simplify=True, accel='auto')[source]¶
Bases: cotengra.oe.PathOptimizer
Class interface to the greedy optimizer which can be instantiated with default options.
- __slots__ = ('costmod', 'temperature', 'simplify', '_optimize_fn')¶
- costmod = 1.0¶
- temperature = 0.0¶
- simplify = True¶
- _optimize_fn¶
- class cotengra.OptimalOptimizer(minimize='flops', cost_cap=2, search_outer=False, simplify=True, accel='auto')[source]¶
Bases: cotengra.oe.PathOptimizer
Class interface to the optimal optimizer which can be instantiated with default options.
- __slots__ = ('minimize', 'cost_cap', 'search_outer', 'simplify', '_optimize_fn')¶
- minimize = 'flops'¶
- cost_cap = 2¶
- search_outer = False¶
- simplify = True¶
- _optimize_fn¶
- class cotengra.RandomGreedyOptimizer(max_repeats=32, costmod=(0.1, 4.0), temperature=(0.001, 1.0), seed=None, simplify=True, accel='auto', parallel='auto')[source]¶
Bases: cotengra.oe.PathOptimizer
Lightweight random greedy optimizer that eschews hyper parameter tuning and contraction tree construction. This is a stateful optimizer that should not be re-used on different contractions.
- Parameters:
max_repeats (int, optional) – The number of random greedy trials to perform.
costmod ((float, float), optional) – When assessing local greedy scores, how much to weight the size of the tensors removed compared to the size of the tensor added:
score = size_ab / costmod - (size_a + size_b) * costmod
It is sampled uniformly from the given range.
temperature ((float, float), optional) – When assessing local greedy scores, how much to randomly perturb the score. This is implemented as:
score -> sign(score) * log(|score|) - temperature * gumbel()
which implements Boltzmann sampling. It is sampled log-uniformly from the given range.
seed (int, optional) – The seed for the random number generator. Note that deterministic behavior is only guaranteed within the python or rust backend (the accel parameter) and parallel settings.
simplify (bool, optional) – Whether to perform simplifications before optimizing. These are:
- ignore any indices that appear in all terms
- combine any repeated indices within a single term
- reduce any non-output indices that only appear on a single term
- combine any scalar terms
- combine any tensors with matching indices (Hadamard products)
Such simplifications may be required in the general case for the proper functioning of the core optimization, but may be skipped if the input indices are already in a simplified form.
accel (bool or str, optional) – Whether to use the accelerated cotengrust backend. If “auto” the backend is used if available.
parallel (bool or str, optional) – Whether to use parallel processing. If “auto” the default is to use threads if the accelerated backend is not used, and processes if it is.
- best_ssa_path¶
The best contraction path found so far.
- Type: list[list[int]]
- best_flops¶
The flops (/ contraction cost / number of multiplications) of the best contraction path found so far.
- Type: float
- max_repeats = 32¶
- simplify = True¶
- rng¶
- best_ssa_path = None¶
- best_flops¶
- _optimize_fn¶
- _pool = None¶
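Example
A minimal sketch; note the optimizer is stateful and should be created fresh for each distinct contraction:
import numpy as np
import cotengra as ctg
inputs = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]
output = ("a", "e")
arrays = [np.random.rand(4, 4) for _ in range(4)]
opt = ctg.RandomGreedyOptimizer(max_repeats=8, seed=42)
out = ctg.array_contract(arrays, inputs, output, optimize=opt)
print(opt.best_flops)  # cost of the best of the 8 sampled paths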
- class cotengra.FlowCutterOptimizer(max_time=10, seed=None, executable='flow_cutter_pace17')[source]¶
Bases: cotengra.oe.PathOptimizer
- max_time = 10¶
- rng¶
- executable = 'flow_cutter_pace17'¶
- cotengra.optimize_flowcutter(inputs, output, size_dict, memory_limit=None, max_time=10, seed=None)[source]¶
- class cotengra.QuickBBOptimizer(max_time=10, executable='quickbb_64', seed=None)[source]¶
Bases: cotengra.oe.PathOptimizer
- max_time = 10¶
- executable = 'quickbb_64'¶
- cotengra.optimize_quickbb(inputs, output, size_dict, memory_limit=None, max_time=60, seed=None)[source]¶
- class cotengra.RandomOptimizer(seed=None)[source]¶
Bases: cotengra.oe.PathOptimizer
A fully random pathfinder that randomly selects pairs of tensors to contract (even if they are not connected). This is useful for testing purposes, and as a baseline for comparison with other pathfinders.
- Parameters:
seed (None, int or np.random.Generator, optional) – Random seed. If None, a random seed is selected. Default is None.
- rng¶
- cotengra.plot_contractions(tree, order=None, color_size=(0.6, 0.4, 0.7), color_cost=(0.3, 0.7, 0.5), figsize=(8, 3))[source]¶
- cotengra.plot_contractions_alt(tree, x='peak-size', y='flops', color='stage', size='scaling', width=400, height=400, point_opacity=0.8, color_scheme='lightmulti', x_scale='log', y_scale='log', color_scale='log', size_scale='linear')[source]¶
- cotengra.plot_scatter(self, x='size', y='flops', cumulative_time=False, plot_best=False, figsize=(5, 5))[source]¶
- cotengra.plot_scatter_alt(self, x='size', y='flops', color='run:Q', color_scheme='purplebluegreen', shape='method:N', width=400, height=400)[source]¶
Plot the trials' total flops vs max size.
- cotengra.plot_slicings(slice_finder, color_scheme='RdYlBu_r', relative_flops=False, figsize=(6, 3), point_opacity=0.8)[source]¶
- cotengra.plot_slicings_alt(slice_finder, color_scheme='redyellowblue', relative_flops=False)[source]¶
- cotengra.plot_tree(tree, layout='ring', hypergraph_layout='auto', hypergraph_layout_opts=None, k=0.01, iterations=500, span=None, order=None, order_y_pow=1.0, edge_scale=1.0, node_scale=1.0, highlight=(), edge_color='size', edge_colormap='GnBu', edge_max_width=None, node_colormap='YlOrRd', node_color='flops', node_max_size=None, figsize=(5, 5), raw_edge_color=None, raw_edge_alpha=None, tree_root_height=True, tree_alpha=0.8, colorbars=True, plot_raw_graph=True, plot_leaf_labels=False, ax=None)[source]¶
Plot a contraction tree using matplotlib.
- cotengra.plot_trials(self, *, x='trial', y='score', figsize=(8, 3), cumulative_time=True, plot_best=True, **kwargs)[source]¶
- cotengra.plot_trials_alt(self, y=None, width=800, height=300)[source]¶
Plot the trials interactively using altair.
- class cotengra.AutoHQOptimizer(**kwargs)[source]¶
Bases: AutoOptimizer
An optimizer that automatically chooses between optimal and hyper-optimization, designed for everyday use on harder contractions or those that will be repeated many times, and thus warrant a more extensive search.
- class cotengra.AutoOptimizer(optimal_cutoff=250, minimize='combo', cache=True, **hyperoptimizer_kwargs)[source]¶
Bases: cotengra.oe.PathOptimizer
An optimizer that automatically chooses between optimal and hyper-optimization, designed for everyday use.
- minimize = 'combo'¶
- optimal_cutoff = 250¶
- _optimize_optimal_fn¶
- kwargs¶
- _hyperoptimizers_by_thread¶
- cotengra.auto_hq_optimize¶
- cotengra.auto_optimize¶
- cotengra.greedy_optimize¶
- cotengra.optimal_optimize¶
- cotengra.optimal_outer_optimize¶
- class cotengra.SliceFinder(tree_or_info, target_size=None, target_overhead=None, target_slices=None, temperature=0.01, minimize='flops', allow_outer=True, seed=None)[source]¶
An object to help find the best indices to slice over in order to reduce the memory footprint of a contraction as much as possible whilst introducing as little extra overhead as possible. It searches for and stores ContractionCosts.
- Parameters:
tree_or_info (ContractionTree or opt_einsum.PathInfo) – Object describing the target full contraction to slice.
target_size (int, optional) – The target number of entries in the largest tensor of the sliced contraction. The search algorithm will terminate after this is reached.
target_slices (int, optional) – The target or minimum number of ‘slices’ to consider - individual contractions after slicing indices. The search algorithm will terminate after this is breached. This is on top of the current number of slices.
target_overhead (float, optional) – The target increase in total number of floating point operations. For example, a value of 2.0 will terminate the search just before the cost of computing all the slices individually exceeds twice that of computing the original contraction all at once.
- info¶
- costs¶
- temperature = 0.01¶
- rng¶
- target_size = None¶
- target_overhead = None¶
- target_slices = None¶
- minimize¶
- best(k=None, target_size=None, target_overhead=None, target_slices=None)[source]¶
Return the best contraction slicing, subject to target filters.
- trial(target_size=None, target_overhead=None, target_slices=None, temperature=None)[source]¶
A single slicing attempt: greedily select indices from the popular pool, subject to the score function, terminating when any of the target criteria are met.
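Example
A minimal sketch; the search() call, which drives repeated trial()s and returns the best slicing found, is assumed here as it is not listed above:
import cotengra as ctg
tree = ctg.array_contract_tree(
    inputs=[("a", "b"), ("b", "c"), ("c", "d")],
    output=("a", "d"),
    size_dict={"a": 32, "b": 32, "c": 32, "d": 32},
)
sf = ctg.SliceFinder(tree, target_size=2**8)
ix_sl, cost_sl = sf.search()  # assumed method: indices to slice and the resulting costs
print(ix_sl)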
- cotengra.get_symbol(i)[source]¶
Get the symbol corresponding to int i - runs through the usual 52 letters before resorting to unicode characters, starting at chr(192) and skipping surrogates.
Examples
get_symbol(2) #> 'c'
get_symbol(200) #> 'Ŕ'
get_symbol(20000) #> '京'
- cotengra.get_symbol_map(inputs)[source]¶
Get a mapping of arbitrary hashable ‘indices’ to single unicode symbols, matching the canonicalization of the expression.
- cotengra.UniformOptimizer¶
Does no Gaussian process tuning by default, just randomly samples - requires no optimization library.
- cotengra.QuasiRandOptimizer¶
Does no Gaussian process tuning by default, just randomly samples but in a more 'even' way than purely random - requires chocolate.
- cotengra.contract_expression[source]¶
Alias for cotengra.einsum_expression().
- cotengra.contract[source]¶
Alias for cotengra.einsum().