cotengra.contract¶
Functionality relating to actually contracting.
Attributes¶

- DEFAULT_IMPLEMENTATION

Classes¶

- Contractor – Default cotengra network contractor.
- CuQuantumContractor

Functions¶

- default_implementation – Context manager for temporarily setting the default implementation.
- _sanitize_equation – Get the input and output indices of an equation, computing the output implicitly if not provided.
- _parse_einsum_single – Cached parsing of a single term einsum equation into the necessary sequence of arguments.
- _parse_eq_to_pure_multiplication – Parse an equation with no contracted indices into a direct broadcast elementwise multiplication.
- _parse_eq_to_batch_matmul – Cached parsing of a two term einsum equation into the necessary sequence of arguments.
- _einsum_single – Einsum on a single tensor, via diagonal selection, summation and transposition.
- _do_contraction_via_bmm
- einsum – Perform arbitrary single and pairwise einsums using only matmul, transpose, reshape and sum.
- gen_nice_inds – Generate the indices from [a-z, A-Z, reasonable unicode...].
- _parse_tensordot_axes_to_matmul – Parse a tensordot specification into the necessary sequence of arguments for matrix multiplication.
- tensordot – Perform a tensordot using only matmul, transpose, reshape.
- extract_contractions – Extract just the information needed to perform the contraction.
- make_contractor – Get a reusable function which performs the contraction corresponding to a tree.
Module Contents¶
- cotengra.contract.DEFAULT_IMPLEMENTATION = 'auto'¶
- cotengra.contract.default_implementation(impl)[source]¶
Context manager for temporarily setting the default implementation.
- cotengra.contract._sanitize_equation(eq)[source]¶
Get the input and output indices of an equation, computing the output implicitly as the sorted sequence of every index that appears exactly once if it is not provided.
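The implicit-output rule can be illustrated in a few lines of plain Python (a sketch of the rule, not the cached internal function):

```python
def implicit_output(eq):
    # e.g. "ab,bc" -> "ac": keep the sorted indices that appear exactly once
    lhs = eq.split("->")[0]
    inds = lhs.replace(",", "")
    return "".join(sorted(ix for ix in set(inds) if inds.count(ix) == 1))
```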
- cotengra.contract._parse_einsum_single(eq, shape)[source]¶
Cached parsing of a single term einsum equation into the necessary sequence of arguments for axes diagonals, sums, and transposes.
- cotengra.contract._parse_eq_to_pure_multiplication(a_term, shape_a, b_term, shape_b, out)[source]¶
If there are no contracted indices, then we can directly transpose and insert singleton dimensions into a and b such that (broadcast) elementwise multiplication performs the einsum. No need to cache this as it is within the cached _parse_eq_to_batch_matmul.
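For example, with no contracted indices an equation like ab,b->ab reduces to inserting a singleton dimension and broadcasting (a numpy sketch of the idea):

```python
import numpy as np

a = np.random.rand(2, 3)  # indices: a, b
b = np.random.rand(3)     # indices: b
# insert a singleton dimension into b so elementwise multiply broadcasts
out = a * b[None, :]
assert np.allclose(out, np.einsum("ab,b->ab", a, b))
```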
- cotengra.contract._parse_eq_to_batch_matmul(eq, shape_a, shape_b)[source]¶
Cached parsing of a two term einsum equation into the necessary sequence of arguments for contraction via batched matrix multiplication. The steps we need to specify are:
1. Remove repeated and trivial indices from the left and right terms, and transpose them, done as a single einsum.
2. Fuse the remaining indices so we have two 3D tensors.
3. Perform the batched matrix multiplication.
4. Unfuse the output to get the desired final index order.
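These steps can be sketched in numpy for the equation abk,kbc->abc, where b is a shared "batch" index and k is contracted:

```python
import numpy as np

x = np.random.rand(2, 3, 4)  # indices: a, b, k
y = np.random.rand(4, 3, 5)  # indices: k, b, c
# 1. transpose so the batch index leads and the contracted index is innermost
xt = x.transpose(1, 0, 2)            # (b, a, k)
yt = y.transpose(1, 0, 2)            # (b, k, c)
# 2. both operands are already 3D here, so no fusing is needed
# 3. batched matrix multiplication over the leading batch index
z = xt @ yt                          # (b, a, c)
# 4. transpose back to the desired output order
out = z.transpose(1, 0, 2)           # (a, b, c)
assert np.allclose(out, np.einsum("abk,kbc->abc", x, y))
```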
- cotengra.contract._einsum_single(eq, x, backend=None)[source]¶
Einsum on a single tensor, via three steps: diagonal selection (via advanced indexing), axes summations, transposition. The logic for each is cached based on the equation and array shape, and each step is only performed if necessary.
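For example, the single-term equation aab->ba combines a diagonal selection with a transpose (a numpy sketch of the three steps):

```python
import numpy as np

x = np.random.rand(3, 3, 4)  # indices: a, a, b (repeated 'a')
# 1. diagonal selection over the repeated index via advanced indexing
d = x[np.arange(3), np.arange(3)]    # shape (3, 4), indices: a, b
# 2. no summed indices here, so the axis summation step is skipped
# 3. transpose to the requested output order 'ba'
out = d.T                            # shape (4, 3)
assert np.allclose(out, np.einsum("aab->ba", x))
```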
- cotengra.contract._do_contraction_via_bmm(a, b, eq_a, eq_b, new_shape_a, new_shape_b, new_shape_ab, perm_ab, pure_multiplication, backend)[source]¶
- cotengra.contract.einsum(eq, a, b=None, *, backend=None)[source]¶
Perform arbitrary single and pairwise einsums using only matmul, transpose, reshape and sum. The logic for each is cached based on the equation and array shape, and each step is only performed if necessary.
- Parameters:
eq (str) – The einsum equation.
a (array_like) – The first array to contract.
b (array_like, optional) – The second array to contract.
backend (str, optional) – The backend to use for array operations. If None, dispatch automatically based on a and b.
- Return type:
array_like
- cotengra.contract.gen_nice_inds()[source]¶
Generate the indices from [a-z, A-Z, reasonable unicode…].
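A sketch of the idea (the exact unicode range cotengra continues into may differ):

```python
import string


def gen_nice_inds():
    # a-z, then A-Z, then keep going through further unicode characters
    yield from string.ascii_lowercase
    yield from string.ascii_uppercase
    i = 192  # an arbitrary starting point into 'reasonable' unicode
    while True:
        yield chr(i)
        i += 1
```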
- cotengra.contract._parse_tensordot_axes_to_matmul(axes, shape_a, shape_b)[source]¶
Parse a tensordot specification into the necessary sequence of arguments for contraction via matrix multiplication. This just converts axes into an einsum eq string then calls _parse_eq_to_batch_matmul.
- cotengra.contract.tensordot(a, b, axes=2, *, backend=None)[source]¶
Perform a tensordot using only matmul, transpose, reshape. The logic for each is cached based on the equation and array shape, and each step is only performed if necessary.
- Parameters:
a (array_like) – The first array to contract.
b (array_like) – The second array to contract.
axes (int or tuple of (sequence[int], sequence[int])) – The number of axes to contract, or the axes to contract. If an int, the last axes axes of a and the first axes axes of b are contracted. If a tuple, the axes to contract for a and b respectively.
backend (str or None, optional) – The backend to use for array operations. If None, dispatch automatically based on a and b.
- Return type:
array_like
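For instance, axes=1 between a 3D and a 2D array reduces to a single reshape + matmul (a numpy sketch of the technique):

```python
import numpy as np

a = np.random.rand(2, 3, 4)
b = np.random.rand(4, 5)
# contract the last axis of a with the first axis of b:
# fuse the kept axes of a, matmul, then unfuse the result
out = (a.reshape(2 * 3, 4) @ b).reshape(2, 3, 5)
assert np.allclose(out, np.tensordot(a, b, axes=1))
```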
- cotengra.contract.extract_contractions(tree, order=None, prefer_einsum=False)[source]¶
Extract just the information needed to perform the contraction.
- Parameters:
order (str or callable, optional) – Supplied to ContractionTree.traverse().
prefer_einsum (bool, optional) – Prefer to use einsum for pairwise contractions, even if tensordot can perform the contraction.
- Returns:
contractions – A tuple of tuples, each containing the information needed to perform a pairwise contraction. Each tuple contains:
- p: the parent node,
- l: the left child node,
- r: the right child node,
- tdot: whether to use tensordot or einsum,
- arg: the argument to pass to tensordot or einsum, i.e. axes or eq,
- perm: the permutation required after the contraction, if any (only applies to tensordot).
If both l and r are None, then the operation is a single term simplification performed with einsum.
- Return type:
tuple
- class cotengra.contract.Contractor(contractions, strip_exponent=False, check_zero=False, implementation='auto', backend=None, progbar=False)[source]¶
Default cotengra network contractor.
- Parameters:
contractions (tuple[tuple]) – The sequence of contractions to perform. Each contraction should be a tuple containing:
- p: the parent node,
- l: the left child node,
- r: the right child node,
- tdot: whether to use tensordot or einsum,
- arg: the argument to pass to tensordot or einsum, i.e. axes or eq,
- perm: the permutation required after the contraction, if any (only applies to tensordot).
e.g. built by calling extract_contractions(tree).
strip_exponent (bool, optional) – If True, eagerly strip the exponent (in log10) from intermediate tensors to control numerical problems from leaving the range of the datatype. This method then returns the scaled 'mantissa' output array and the exponent separately.
check_zero (bool, optional) – If True, when strip_exponent=True, explicitly check for zero-valued intermediates that would otherwise produce nan, instead terminating early if encountered and returning (0.0, 0.0).
backend (str, optional) – What library to use for tensordot, einsum and transpose, it will be automatically inferred from the input arrays if not given.
progbar (bool, optional) – Whether to show a progress bar.
- __slots__ = ('contractions', 'strip_exponent', 'check_zero', 'implementation', 'backend', 'progbar', '__weakref__')¶
- contractions¶
- strip_exponent = False¶
- check_zero = False¶
- implementation = 'auto'¶
- backend = None¶
- progbar = False¶
- __call__(*arrays, **kwargs)[source]¶
Contract arrays using the operations listed in contractions.
- Parameters:
arrays (sequence of array-like) – The arrays to contract.
kwargs (dict) – Override the default settings for this contraction only.
- Returns:
output (array) – The contracted output, it will be scaled if strip_exponent==True.
exponent (float) – The exponent of the output in base 10, returned only if strip_exponent==True.
- class cotengra.contract.CuQuantumContractor(tree, handle_slicing=False, autotune=False, **kwargs)[source]¶
- kwargs¶
- autotune = 3¶
- handle = None¶
- network = None¶
- cotengra.contract.make_contractor(tree, order=None, prefer_einsum=False, strip_exponent=False, check_zero=False, implementation=None, autojit=False, progbar=False)[source]¶
Get a reusable function which performs the contraction corresponding to tree. The various options provide defaults that can also be overridden when calling the standard contractor.
- Parameters:
tree (ContractionTree) – The contraction tree.
order (str or callable, optional) – Supplied to ContractionTree.traverse(), the order in which to perform the pairwise contractions given by the tree.
prefer_einsum (bool, optional) – Prefer to use einsum for pairwise contractions, even if tensordot can perform the contraction.
strip_exponent (bool, optional) – If True, the function will strip the exponent from the output array and return it separately.
check_zero (bool, optional) – If True, when strip_exponent=True, explicitly check for zero-valued intermediates that would otherwise produce nan, instead terminating early if encountered and returning (0.0, 0.0).
implementation (str or tuple[callable, callable], optional) – What library to use to actually perform the contractions. Options are:
- "auto": let cotengra choose.
- "autoray": dispatch with autoray, using the tensordot and einsum implementation of the backend.
- "cotengra": use the tensordot and einsum implementation of cotengra, which is based on batch matrix multiplication. This is faster for some backends like numpy, and also enables libraries which don't yet provide tensordot and einsum to be used.
- "cuquantum": use the cuquantum library to perform the whole contraction (not just individual contractions).
- tuple[callable, callable]: manually supply the tensordot and einsum implementations to use.
autojit (bool, optional) – If True, use autoray.autojit() to compile the contraction function.
progbar (bool, optional) – Whether to show progress through the contraction by default.
- Returns:
fn – The contraction function, with signature fn(*arrays).
- Return type:
callable