cotengra.interface¶
High-level interface functions to cotengra.
Attributes¶

- `_PRESETS`
- `_COMPRESSED_PRESETS`
- `_HASH_OPTIMIZE_PREPARERS`
- `_find_path_handlers`
- `_PATH_CACHE`
- `_find_tree_handlers`
- `_CONTRACT_EXPR_CACHE`

Classes¶

- `Variadic` – Wrapper to make a non-variadic (i.e. with signature `f(arrays)`) function variadic (i.e. with signature `f(*arrays)`).
- `Via` – Wrapper that applies one function to the input arrays and another to the output array.
- `WithBackend` – Wrapper to make any autoray written function take a `backend` kwarg.

Functions¶

- `register_preset` – Register a preset optimizer.
- `can_hash_optimize` – Check if the type of `optimize` supplied can be hashed.
- `hash_prepare_optimize` – Transform an `optimize` object into a hashable form.
- `hash_contraction` – Compute a hash key for the specified contraction.
- `normalize_input` – Parse a contraction definition, optionally canonicalizing the indices.
- `find_path` – Directly find a contraction path for a given set of inputs and output.
- `array_contract_path` – Find only the contraction path for the specific contraction, with fast dispatch of `optimize`.
- `find_tree` – Find a contraction tree for the specific contraction, with fast dispatch of `optimize`.
- `array_contract_tree` – Get the `ContractionTree` for the specified tensor contraction.
- `array_contract_expression` – Get a callable 'expression' that will contract tensors with the given indices and shapes.
- `array_contract` – Perform the tensor contraction specified by `inputs`, `output` and `size_dict`.
- `einsum_tree` – Get the `ContractionTree` for an einsum equation.
- `einsum_expression` – Get a callable 'expression' that will contract tensors with the given shapes.
- `einsum` – Perform an einsum contraction, using cotengra, with the strategy given by `optimize`.
Module Contents¶
- cotengra.interface._PRESETS¶
- cotengra.interface._COMPRESSED_PRESETS¶
- cotengra.interface.register_preset(preset, optimizer, register_opt_einsum='auto', compressed=False)[source]¶
Register a preset optimizer.
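For example, a sketch of registering a custom preset (the name `'my-hyper'` is hypothetical, and we assume an optimizer instance is an acceptable value here):

```python
import cotengra as ctg

# 'my-hyper' is a hypothetical preset name; ReusableHyperOptimizer
# is one of cotengra's reusable hyper-optimizers
ctg.register_preset("my-hyper", ctg.ReusableHyperOptimizer(max_repeats=32))

# the preset can then be passed anywhere an `optimize` kwarg is accepted,
# e.g. ctg.einsum(eq, *arrays, optimize="my-hyper")
```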
- cotengra.interface.can_hash_optimize(cls)[source]¶
Check if the type of optimize supplied can be hashed.
- cotengra.interface._HASH_OPTIMIZE_PREPARERS¶
- cotengra.interface.hash_prepare_optimize(optimize)[source]¶
Transform an optimize object into a hashable form.
- cotengra.interface.hash_contraction(inputs, output, size_dict, optimize, **kwargs)[source]¶
Compute a hash key for the specified contraction.
- cotengra.interface.normalize_input(inputs, output=None, size_dict=None, shapes=None, canonicalize=True)[source]¶
Parse a contraction definition, optionally canonicalizing the indices (mapping them into symbols beginning with `'a', 'b', 'c', ...`), computing the output if not specified, and computing the `size_dict` from `shapes` if not given.
- cotengra.interface._find_path_handlers¶
- cotengra.interface.find_path(inputs, output, size_dict, optimize='auto', **kwargs)[source]¶
Directly find a contraction path for a given set of inputs and output.
- Parameters:
inputs (Sequence[Sequence[str]]) – The input terms.
output (Sequence[str]) – The output term.
size_dict (dict[str, int]) – The size of each index.
optimize (str, path_like, PathOptimizer, or ContractionTree) – The optimization strategy to use. This can be:
  - A string preset, e.g. `'auto'`, `'greedy'`, `'optimal'`.
  - A `PathOptimizer` instance.
  - An explicit path, e.g. `[(0, 1), (2, 3), ...]`.
  - An explicit `ContractionTree` instance.
- Returns:
path – The contraction path.
- Return type:
tuple[tuple[int]]
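For example, a minimal sketch (the index labels and sizes are illustrative):

```python
from cotengra.interface import find_path

# matrix chain (i,j) x (j,k) x (k,l) -> (i,l)
path = find_path(
    inputs=[("i", "j"), ("j", "k"), ("k", "l")],
    output=("i", "l"),
    size_dict={"i": 2, "j": 3, "k": 4, "l": 5},
    optimize="greedy",
)
# e.g. ((0, 1), (0, 1)) - a sequence of pairwise contractions
```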
- cotengra.interface._PATH_CACHE¶
- cotengra.interface.array_contract_path(inputs, output=None, size_dict=None, shapes=None, optimize='auto', canonicalize=True, cache=True)[source]¶
Find only the contraction path for the specific contraction, with fast dispatch of `optimize`, which can be a preset, path, tree, cotengra optimizer or opt_einsum optimizer. The raw path is a more compact representation of the core tree structure, but contains less information on its own; for example, sliced indices are not included.
- Parameters:
inputs (Sequence[Sequence[Hashable]]) – The input terms.
output (Sequence[Hashable], optional) – The output term.
size_dict (dict[Hashable, int], optional) – The size of each index. If given, `shapes` is ignored.
shapes (Sequence[tuple[int]], optional) – The shape of each input array. Needed if `size_dict` is not supplied.
optimize (str, path_like, PathOptimizer, or ContractionTree) – The optimization strategy to use. This can be:
  - A string preset, e.g. `'auto'`, `'greedy'`, `'optimal'`.
  - A `PathOptimizer` instance.
  - An explicit path, e.g. `[(0, 1), (2, 3), ...]`.
  - An explicit `ContractionTree` instance.
canonicalize (bool, optional) – If `True`, canonicalize the inputs and output so that the indices are relabelled `'a', 'b', 'c', ...`, etc. in the order they appear.
cache (bool, optional) – If `True`, cache the path for the contraction, so that if the same pathfinding is performed multiple times the overhead is negated. Only for hashable `optimize` objects.
- Returns:
path – The contraction path, whose interpretation is thus: the input tensors are assumed to be stored in a list, i.e. indexed by `range(N)`. Each contraction in the path is a set of indices; the tensors at these locations should be popped from the list and then the result of the contraction appended.
- Return type:
tuple[tuple[int]]
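For example, a minimal sketch using `shapes` rather than `size_dict` (labels and sizes illustrative):

```python
import cotengra as ctg

path = ctg.array_contract_path(
    inputs=[("i", "j"), ("j", "k"), ("k", "l")],
    output=("i", "l"),
    shapes=[(2, 3), (3, 4), (4, 5)],
)
# interpretation: at each step, pop the tensors at the given
# locations from the list and append the contracted result
```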
- cotengra.interface._find_tree_optimizer_search(inputs, output, size_dict, optimize, **kwargs)[source]¶
- cotengra.interface._find_tree_optimizer_basic(inputs, output, size_dict, optimize, **kwargs)[source]¶
- cotengra.interface._find_tree_handlers¶
- cotengra.interface.find_tree(inputs, output, size_dict, optimize='auto', **kwargs)[source]¶
Find a contraction tree for the specific contraction, with fast dispatch of `optimize`, which can be a preset, path, tree, cotengra optimizer or opt_einsum optimizer.
- Parameters:
inputs (Sequence[Sequence[str]]) – The input terms.
output (Sequence[str]) – The output term.
size_dict (dict[str, int]) – The size of each index.
optimize (str, path_like, PathOptimizer, or ContractionTree) – The optimization strategy to use. This can be:
  - A string preset, e.g. `'auto'`, `'greedy'`, `'optimal'`.
  - A `PathOptimizer` instance.
  - An explicit path, e.g. `[(0, 1), (2, 3), ...]`.
  - An explicit `ContractionTree` instance.
- Returns:
tree
- Return type:
ContractionTree
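A minimal sketch, here supplying an explicit path as the `optimize` argument:

```python
from cotengra.interface import find_tree

tree = find_tree(
    inputs=[("i", "j"), ("j", "k")],
    output=("i", "k"),
    size_dict={"i": 2, "j": 3, "k": 4},
    optimize=[(0, 1)],  # presets, optimizers and trees are also accepted
)
```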
- cotengra.interface.array_contract_tree(inputs, output=None, size_dict=None, shapes=None, optimize='auto', canonicalize=True, sort_contraction_indices=False)[source]¶
Get the `ContractionTree` for the tensor contraction specified by `inputs`, `output` and `size_dict`, with optimization strategy given by `optimize`. The tree can be used to inspect and also perform the contraction.
- Parameters:
inputs (Sequence[Sequence[Hashable]]) – The input terms.
output (Sequence[Hashable], optional) – The output term.
size_dict (dict[Hashable, int], optional) – The size of each index. If given, `shapes` is ignored.
shapes (Sequence[tuple[int]], optional) – The shape of each input array. Needed if `size_dict` is not supplied.
optimize (str, path_like, PathOptimizer, or ContractionTree) – The optimization strategy to use. This can be:
  - A string preset, e.g. `'auto'`, `'greedy'`, `'optimal'`.
  - A `PathOptimizer` instance.
  - An explicit path, e.g. `[(0, 1), (2, 3), ...]`.
  - An explicit `ContractionTree` instance.
canonicalize (bool, optional) – If `True`, canonicalize the inputs and output so that the indices are relabelled `'a', 'b', 'c', ...`, etc. in the order they appear.
sort_contraction_indices (bool, optional) – If `True`, call `tree.sort_contraction_indices()`.
- Return type:
ContractionTree
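For example, a sketch of building a tree, inspecting it, and performing the contraction with it (labels and shapes illustrative):

```python
import numpy as np
import cotengra as ctg

tree = ctg.array_contract_tree(
    inputs=[("i", "j"), ("j", "k"), ("k", "l")],
    output=("i", "l"),
    shapes=[(2, 3), (3, 4), (4, 5)],
)

tree.contraction_cost()   # estimated number of scalar operations
tree.contraction_width()  # log2 of the size of the largest intermediate

# the tree can also perform the contraction itself
arrays = [np.ones(s) for s in [(2, 3), (3, 4), (4, 5)]]
out = tree.contract(arrays)
```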
- class cotengra.interface.Variadic(fn, **kwargs)[source]¶
Wrapper to make a non-variadic (i.e. with signature `f(arrays)`) function variadic (i.e. with signature `f(*arrays)`).
- __slots__ = ('fn', 'kwargs')¶
- fn¶
- kwargs¶
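A minimal illustration (the summing function is just for demonstration):

```python
import numpy as np
from cotengra.interface import Variadic

def total(arrays):  # non-variadic: takes a single sequence of arrays
    return sum(arrays)

f = Variadic(total)        # now callable as f(x, y, ...)
f(np.ones(3), np.ones(3))  # array([2., 2., 2.])
```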
- class cotengra.interface.Via(fn, convert_in, convert_out)[source]¶
Wrapper that applies one function to the input arrays and another to the output array. For example, moving the tensors from CPU to GPU and back.
- __slots__ = ('fn', 'convert_in', 'convert_out')¶
- fn¶
- convert_in¶
- convert_out¶
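A sketch using dtype conversion as a stand-in for a CPU/GPU round trip (this assumes `convert_in` is applied to each input array individually):

```python
import numpy as np
from cotengra.interface import Via

f = Via(
    lambda x, y: x @ y,  # the wrapped function is called variadically
    convert_in=lambda a: a.astype("float32"),
    convert_out=lambda a: a.astype("float64"),
)
# computed in float32, returned as float64
out = f(np.ones((2, 3)), np.ones((3, 4)))
```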
- class cotengra.interface.WithBackend(fn)[source]¶
Wrapper to make any autoray written function take a `backend` kwarg, by simply using `autoray.backend_like`.
- __slots__ = ('fn',)¶
- fn¶
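A sketch of the intended pattern, wrapping a contraction expression so that a backend can be requested explicitly:

```python
import numpy as np
import cotengra as ctg
from cotengra.interface import WithBackend

expr = ctg.einsum_expression("ab,bc->ac", (2, 3), (3, 4))
fn = WithBackend(expr)

# internal array operations are dispatched like the requested backend
out = fn(np.ones((2, 3)), np.ones((3, 4)), backend="numpy")
```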
- cotengra.interface._array_contract_expression_with_constants(inputs, output, size_dict, constants, optimize='auto', implementation=None, prefer_einsum=False, autojit=False, via=None, sort_contraction_indices=False, cache=True)[source]¶
- cotengra.interface._build_expression(inputs, output=None, size_dict=None, optimize='auto', implementation=None, prefer_einsum=False, autojit=False, via=None, sort_contraction_indices=False)[source]¶
- cotengra.interface._CONTRACT_EXPR_CACHE¶
- cotengra.interface.array_contract_expression(inputs, output=None, size_dict=None, shapes=None, optimize='auto', constants=None, canonicalize=True, cache=True, **kwargs)[source]¶
Get a callable 'expression' that will contract tensors with indices and shapes described by `inputs` and `size_dict` to `output`. The `optimize` kwarg can be a path, optimizer or also a contraction tree. In the latter case sliced indices for example will be used if present. The same is true if `optimize` is an optimizer that can directly produce `ContractionTree` instances (i.e. has a `.search()` method).
- Parameters:
inputs (Sequence[Sequence[Hashable]]) – The input terms.
output (Sequence[Hashable]) – The output term.
size_dict (dict[Hashable, int]) – The size of each index.
optimize (str, path_like, PathOptimizer, or ContractionTree) – The optimization strategy to use. This can be:
  - A string preset, e.g. `'auto'`, `'greedy'`, `'optimal'`.
  - A `PathOptimizer` instance.
  - An explicit path, e.g. `[(0, 1), (2, 3), ...]`.
  - An explicit `ContractionTree` instance.
  If the optimizer provides sliced indices they will be used.
constants (dict[int, array_like], optional) – A mapping of constant input positions to constant arrays. If given, the final expression will take only the remaining non-constant tensors as inputs. Note this is a different format to the `constants` kwarg of `einsum_expression()` since it also provides the constant arrays.
implementation (str or tuple[callable, callable], optional) – What library to use to actually perform the contractions. Options are:
  - None: let cotengra choose.
  - "autoray": dispatch with autoray, using the `tensordot` and `einsum` implementation of the backend.
  - "cotengra": use the `tensordot` and `einsum` implementation of cotengra, which is based on batch matrix multiplication. This is faster for some backends like numpy, and also enables libraries which don't yet provide `tensordot` and `einsum` to be used.
  - "cuquantum": use the cuquantum library to perform the whole contraction (not just individual contractions).
  - tuple[callable, callable]: manually supply the `tensordot` and `einsum` implementations to use.
autojit (bool, optional) – If `True`, use `autoray.autojit()` to compile the contraction function.
via (tuple[callable, callable], optional) – If given, the first function will be applied to the input arrays and the second to the output array. For example, moving the tensors from CPU to GPU and back.
sort_contraction_indices (bool, optional) – If `True`, call `tree.sort_contraction_indices()` before constructing the contraction function.
cache (bool, optional) – If `True`, cache the contraction expression. This negates the overhead of pathfinding and building the expression when a contraction is performed multiple times. Only for hashable `optimize` objects.
- Returns:
expr – A callable, with signature `expr(*arrays)`, that will contract `arrays` with shapes `shapes`.
- Return type:
callable
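For example, a minimal sketch (labels and sizes illustrative):

```python
import numpy as np
import cotengra as ctg

expr = ctg.array_contract_expression(
    inputs=[("i", "j"), ("j", "k")],
    output=("i", "k"),
    size_dict={"i": 2, "j": 3, "k": 4},
    optimize="greedy",
)
# reusable for any arrays of the matching shapes
z = expr(np.ones((2, 3)), np.ones((3, 4)))
```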
- cotengra.interface.array_contract(arrays, inputs, output=None, optimize='auto', cache_expression=True, backend=None, **kwargs)[source]¶
Perform the tensor contraction specified by `inputs`, `output` and `size_dict`, using the strategy given by `optimize`. By default the path finding and expression building is cached, so that if a matching contraction is performed multiple times the overhead is negated.
- Parameters:
arrays (Sequence[array_like]) – The arrays to contract.
inputs (Sequence[Sequence[Hashable]]) – The input terms.
output (Sequence[Hashable]) – The output term.
optimize (str, path_like, PathOptimizer, or ContractionTree) – The optimization strategy to use. This can be:
  - A string preset, e.g. `'auto'`, `'greedy'`, `'optimal'`.
  - A `PathOptimizer` instance.
  - An explicit path, e.g. `[(0, 1), (2, 3), ...]`.
  - An explicit `ContractionTree` instance.
  If the optimizer provides sliced indices they will be used.
cache_expression (bool, optional) – If `True`, cache the expression used to contract the arrays. This negates the overhead of pathfinding and building the expression when a contraction is performed multiple times. Only for hashable `optimize` objects.
backend (str, optional) – If given, the explicit backend to use for the contraction; by default the backend is dispatched automatically.
kwargs – Passed to `array_contract_expression()`.
- Return type:
array_like
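For example (note the index labels can be arbitrary hashable objects, not just single characters):

```python
import numpy as np
import cotengra as ctg

x, y = np.ones((2, 3)), np.ones((3, 4))

out = ctg.array_contract(
    arrays=(x, y),
    inputs=[("left", "bond"), ("bond", "right")],
    output=("left", "right"),
)
```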
- cotengra.interface.einsum_tree(*args, optimize='auto', canonicalize=False, sort_contraction_indices=False)[source]¶
Get the `ContractionTree` for the einsum equation `eq` and optimization strategy `optimize`. The tree can be used to inspect and also perform the contraction.
- Parameters:
eq (str) – The equation to use for contraction, for example `'ab,bc->ac'`.
shapes (Sequence[tuple[int]]) – The shape of each input array.
optimize (str, path_like, PathOptimizer, or ContractionTree) – The optimization strategy to use. This can be:
  - A string preset, e.g. `'auto'`, `'greedy'`, `'optimal'`.
  - A `PathOptimizer` instance.
  - An explicit path, e.g. `[(0, 1), (2, 3), ...]`.
  - An explicit `ContractionTree` instance.
canonicalize (bool, optional) – If `True`, canonicalize the inputs and output so that the indices are relabelled `'a', 'b', 'c', ...`, etc. in the order they appear.
sort_contraction_indices (bool, optional) – If `True`, call `tree.sort_contraction_indices()`.
- Return type:
ContractionTree
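For example, a minimal sketch (shapes illustrative):

```python
import cotengra as ctg

tree = ctg.einsum_tree("ab,bc,cd->ad", (2, 3), (3, 4), (4, 5))
tree.contraction_cost()   # estimated number of scalar operations
tree.contraction_width()  # log2 of the size of the largest intermediate
```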
- cotengra.interface.einsum_expression(*args, optimize='auto', constants=None, cache=True, **kwargs)[source]¶
Get a callable 'expression' that will contract tensors with shapes `shapes` according to the equation `eq`. The `optimize` kwarg can be a path, optimizer or also a contraction tree. In the latter case sliced indices for example will be used if present. The same is true if `optimize` is an optimizer that can directly produce `ContractionTree` instances (i.e. has a `.search()` method).
- Parameters:
eq (str) – The equation to use for contraction, for example `'ab,bc->ac'`. The output will be automatically computed if not supplied, but Ellipses ('...') are not supported.
shapes (Sequence[tuple[int]]) – The shapes of the tensors to contract, or the constant tensor itself if marked as constant in `constants`.
optimize (str, path_like, PathOptimizer, or ContractionTree) – The optimization strategy to use. This can be:
  - A string preset, e.g. `'auto'`, `'greedy'`, `'optimal'`.
  - A `PathOptimizer` instance.
  - An explicit path, e.g. `[(0, 1), (2, 3), ...]`.
  - An explicit `ContractionTree` instance.
  If the optimizer provides sliced indices they will be used.
constants (Sequence of int, optional) – The indices of tensors to treat as constant; the final expression will take the remaining non-constant tensors as inputs. Note this is a different format to the `constants` kwarg of `array_contract_expression()` since the actual constant arrays are inserted into `shapes`.
implementation (str or tuple[callable, callable], optional) – What library to use to actually perform the contractions. Options are:
  - None: let cotengra choose.
  - "autoray": dispatch with autoray, using the `tensordot` and `einsum` implementation of the backend.
  - "cotengra": use the `tensordot` and `einsum` implementation of cotengra, which is based on batch matrix multiplication. This is faster for some backends like numpy, and also enables libraries which don't yet provide `tensordot` and `einsum` to be used.
  - "cuquantum": use the cuquantum library to perform the whole contraction (not just individual contractions).
  - tuple[callable, callable]: manually supply the `tensordot` and `einsum` implementations to use.
autojit (bool, optional) – If `True`, use `autoray.autojit()` to compile the contraction function.
via (tuple[callable, callable], optional) – If given, the first function will be applied to the input arrays and the second to the output array. For example, moving the tensors from CPU to GPU and back.
sort_contraction_indices (bool, optional) – If `True`, call `tree.sort_contraction_indices()` before constructing the contraction function.
cache (bool, optional) – If `True`, cache the contraction expression. This negates the overhead of pathfinding and building the expression when a contraction is performed multiple times. Only for hashable `optimize` objects.
- Returns:
expr – A callable, with signature `expr(*arrays)`, that will contract `arrays` with shapes matching `shapes`.
- Return type:
callable
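For example, a sketch of the `constants` mechanism (arrays and shapes illustrative):

```python
import numpy as np
import cotengra as ctg

y = np.ones((3, 4))  # constant tensor, supplied in place of its shape
expr = ctg.einsum_expression(
    "ab,bc->ac",
    (2, 3),  # shape of the first, non-constant, input
    y,       # the constant array itself, since 1 is in `constants`
    constants=[1],
)
z = expr(np.ones((2, 3)))  # only the non-constant tensor is passed
```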
- cotengra.interface.einsum(*args, optimize='auto', cache_expression=True, backend=None, **kwargs)[source]¶
Perform an einsum contraction, using cotengra, with the strategy given by `optimize`. By default the path finding and expression building is cached, so that if a matching contraction is performed multiple times the overhead is negated.
- Parameters:
eq (str) – The equation to use for contraction, for example `'ab,bc->ac'`.
arrays (Sequence[array_like]) – The arrays to contract.
optimize (str, path_like, PathOptimizer, or ContractionTree) – The optimization strategy to use. This can be:
  - A string preset, e.g. `'auto'`, `'greedy'`, `'optimal'`.
  - A `PathOptimizer` instance.
  - An explicit path, e.g. `[(0, 1), (2, 3), ...]`.
  - An explicit `ContractionTree` instance.
  If the optimizer provides sliced indices they will be used.
cache_expression (bool, optional) – If `True`, cache the expression used to contract the arrays. This negates the overhead of pathfinding and building the expression when a contraction is performed multiple times. Only for hashable `optimize` objects.
backend (str, optional) – If given, the explicit backend to use for the contraction; by default the backend is dispatched automatically.
kwargs – Passed to `array_contract_expression()`.
- Return type:
array_like
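For example (illustrative shapes; the second call reuses the cached expression):

```python
import numpy as np
import cotengra as ctg

x, y = np.ones((2, 3)), np.ones((3, 4))
z = ctg.einsum("ab,bc->ac", x, y, optimize="greedy")

# a repeat contraction with the same equation and shapes hits the
# expression cache, so the pathfinding overhead is only paid once
z = ctg.einsum("ab,bc->ac", x, y, optimize="greedy")
```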