cotengra.interface

High-level interface functions to cotengra.

Module Contents

Classes

Variadic

Wrapper to make a non-variadic function (i.e. with signature f(arrays)) variadic (i.e. with signature f(*arrays)).

Via

Wrapper that applies one function to the input arrays and another to the output array.

WithBackend

Wrapper to make any autoray-written function take a backend kwarg, by simply using autoray.backend_like.

Functions

register_preset(preset, optimizer[, ...])

Register a preset optimizer.

preset_to_optimizer(preset)

can_hash_optimize(cls)

Check if the type of optimize supplied can be hashed.

identity(x)

hash_prepare_optimize(optimize)

Transform an optimize object into a hashable form.

hash_contraction(inputs, output, size_dict, optimize, ...)

Compute a hash key for the specified contraction.

normalize_input(inputs[, output, size_dict, shapes, ...])

Parse a contraction definition, optionally canonicalizing the indices.

_find_path_explicit_path(inputs, output, size_dict, ...)

_find_path_optimizer(inputs, output, size_dict, ...)

_find_path_preset(inputs, output, size_dict, optimize, ...)

_find_path_tree(inputs, output, size_dict, optimize, ...)

find_path(inputs, output, size_dict[, optimize])

Directly find a contraction path for a given set of inputs and output.

array_contract_path(inputs[, output, size_dict, ...])

Find only the contraction path for the specific contraction, with fast dispatch of optimize.

_find_tree_explicit(inputs, output, size_dict, optimize)

_find_tree_optimizer_search(inputs, output, size_dict, ...)

_find_tree_optimizer_basic(inputs, output, size_dict, ...)

_find_tree_preset(inputs, output, size_dict, optimize, ...)

_find_tree_tree(inputs, output, size_dict, optimize, ...)

find_tree(inputs, output, size_dict[, optimize])

Find a contraction tree for the specific contraction, with fast dispatch of optimize.

array_contract_tree(inputs[, output, size_dict, ...])

Get the ContractionTree for the tensor contraction specified by inputs, output and size_dict.

_array_contract_expression_with_constants(inputs, ...)

_build_expression(inputs[, output, size_dict, ...])

array_contract_expression(inputs[, output, size_dict, ...])

Get a callable 'expression' that will contract tensors with indices and shapes described by inputs and size_dict to output.

array_contract(arrays, inputs[, output, optimize, ...])

Perform the tensor contraction specified by inputs, output and size_dict.

einsum_tree(*args[, optimize, canonicalize, ...])

Get the ContractionTree for the einsum equation eq and optimization strategy optimize.

einsum_expression(*args[, optimize, constants, cache])

Get a callable 'expression' that will contract tensors with shapes shapes according to equation eq.

einsum(*args[, optimize, cache_expression, backend])

Perform an einsum contraction using cotengra, with strategy given by optimize.

Attributes

cotengra.interface._PRESETS
cotengra.interface._COMPRESSED_PRESETS
cotengra.interface.register_preset(preset, optimizer, register_opt_einsum='auto', compressed=False)[source]

Register a preset optimizer.
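
For example, a custom optimizer can be registered under a new preset name and then used anywhere an optimize kwarg is accepted. A minimal sketch (the preset name 'my-hyper' and the optimizer settings are illustrative):

    import cotengra as ctg
    from cotengra.interface import register_preset

    # register a reusable hyper optimizer under a hypothetical preset name
    opt = ctg.ReusableHyperOptimizer(max_repeats=16)
    register_preset('my-hyper', opt)

    # the new preset is now valid wherever `optimize` is accepted
    path = ctg.array_contract_path(
        [('a', 'b'), ('b', 'c')],
        shapes=[(2, 3), (3, 4)],
        optimize='my-hyper',
    )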

cotengra.interface.preset_to_optimizer(preset)[source]
cotengra.interface.can_hash_optimize(cls)[source]

Check if the type of optimize supplied can be hashed.

cotengra.interface.identity(x)[source]
cotengra.interface._HASH_OPTIMIZE_PREPARERS
cotengra.interface.hash_prepare_optimize(optimize)[source]

Transform an optimize object into a hashable form.

cotengra.interface.hash_contraction(inputs, output, size_dict, optimize, **kwargs)[source]

Compute a hash key for the specified contraction.

cotengra.interface.normalize_input(inputs, output=None, size_dict=None, shapes=None, canonicalize=True)[source]

Parse a contraction definition, optionally canonicalizing the indices (mapping them into symbols beginning with 'a', 'b', 'c', ...), computing the output if not specified, and computing the size_dict from shapes if not supplied.

cotengra.interface._find_path_handlers
cotengra.interface._find_path_explicit_path(inputs, output, size_dict, optimize)[source]
cotengra.interface._find_path_optimizer(inputs, output, size_dict, optimize, **kwargs)[source]
cotengra.interface._find_path_preset(inputs, output, size_dict, optimize, **kwargs)[source]
cotengra.interface._find_path_tree(inputs, output, size_dict, optimize, **kwargs)[source]
cotengra.interface.find_path(inputs, output, size_dict, optimize='auto', **kwargs)[source]

Directly find a contraction path for a given set of inputs and output.

Parameters:
  • inputs (Sequence[Sequence[str]]) – The input terms.

  • output (Sequence[str]) – The output term.

  • size_dict (dict[str, int]) – The size of each index.

  • optimize (str, path_like, PathOptimizer, or ContractionTree) –

    The optimization strategy to use. This can be:

    • A string preset, e.g. 'auto', 'greedy', 'optimal'.

    • A PathOptimizer instance.

    • An explicit path, e.g. [(0, 1), (2, 3), ...].

    • An explicit ContractionTree instance.

Returns:

path – The contraction path.

Return type:

tuple[tuple[int]]
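
A minimal usage sketch (the index labels, sizes and the 'greedy' strategy are illustrative):

    from cotengra.interface import find_path

    inputs = [('a', 'b'), ('b', 'c'), ('c', 'd')]
    output = ('a', 'd')
    size_dict = {'a': 2, 'b': 3, 'c': 4, 'd': 2}

    path = find_path(inputs, output, size_dict, optimize='greedy')
    # e.g. ((0, 1), (0, 1)) -- pairwise contractions in terms of positions
    # in the (shrinking) list of tensors; the exact path depends on the
    # strategy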

cotengra.interface._PATH_CACHE
cotengra.interface.array_contract_path(inputs, output=None, size_dict=None, shapes=None, optimize='auto', canonicalize=True, cache=True)[source]

Find only the contraction path for the specific contraction, with fast dispatch of optimize, which can be a preset, path, tree, cotengra optimizer or opt_einsum optimizer. The raw path is a more compact representation of the core tree structure but contains less information on its own; for example, sliced indices are not included.

Parameters:
  • inputs (Sequence[Sequence[Hashable]]) – The input terms.

  • output (Sequence[Hashable], optional) – The output term.

  • size_dict (dict[Hashable, int], optional) – The size of each index, if given, shapes is ignored.

  • shapes (Sequence[tuple[int]], optional) – The shape of each input array. Needed if size_dict not supplied.

  • optimize (str, path_like, PathOptimizer, or ContractionTree) –

    The optimization strategy to use. This can be:

    • A string preset, e.g. 'auto', 'greedy', 'optimal'.

    • A PathOptimizer instance.

    • An explicit path, e.g. [(0, 1), (2, 3), ...].

    • An explicit ContractionTree instance.

  • canonicalize (bool, optional) – If True, canonicalize the inputs and output so that the indices are relabelled 'a', 'b', 'c', ..., etc. in the order they appear.

  • cache (bool, optional) – If True, cache the path for the contraction, so that if the same pathfinding is performed multiple times the overhead is negated. Only for hashable optimize objects.

Returns:

path – The contraction path, whose interpretation is thus: the input tensors are assumed to be stored in a list, i.e. indexed by range(N). Each contraction in the path is a tuple of positions in that list; the tensors at these locations should be popped from the list and the result of the contraction appended.

Return type:

tuple[tuple[int]]
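
The 'pop and append' interpretation of the returned path can be spelled out explicitly. A sketch, with illustrative inputs and shapes:

    from cotengra.interface import array_contract_path

    inputs = [('a', 'b'), ('b', 'c'), ('c', 'd')]
    shapes = [(2, 3), (3, 4), (4, 5)]
    path = array_contract_path(inputs, shapes=shapes, optimize='greedy')

    # simulate how the path indexes into a shrinking list of tensors
    terms = [f'T{i}' for i in range(len(inputs))]
    for step in path:
        popped = [terms.pop(i) for i in sorted(step, reverse=True)]
        terms.append('(' + '*'.join(popped) + ')')
    print(terms)  # a single combined term remains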

cotengra.interface._find_tree_explicit(inputs, output, size_dict, optimize)[source]
cotengra.interface._find_tree_optimizer_basic(inputs, output, size_dict, optimize, **kwargs)[source]
cotengra.interface._find_tree_preset(inputs, output, size_dict, optimize, **kwargs)[source]
cotengra.interface._find_tree_tree(inputs, output, size_dict, optimize, **kwargs)[source]
cotengra.interface._find_tree_handlers
cotengra.interface.find_tree(inputs, output, size_dict, optimize='auto', **kwargs)[source]

Find a contraction tree for the specific contraction, with fast dispatch of optimize, which can be a preset, path, tree, cotengra optimizer or opt_einsum optimizer.

Parameters:
  • inputs (Sequence[Sequence[str]]) – The input terms.

  • output (Sequence[str]) – The output term.

  • size_dict (dict[str, int]) – The size of each index.

  • optimize (str, path_like, PathOptimizer, or ContractionTree) –

    The optimization strategy to use. This can be:

    • A string preset, e.g. 'auto', 'greedy', 'optimal'.

    • A PathOptimizer instance.

    • An explicit path, e.g. [(0, 1), (2, 3), ...].

    • An explicit ContractionTree instance.

Returns:

tree

Return type:

ContractionTree
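
A minimal usage sketch (labels and sizes are illustrative):

    from cotengra.interface import find_tree

    tree = find_tree(
        [('a', 'b'), ('b', 'c')],
        ('a', 'c'),
        {'a': 2, 'b': 3, 'c': 4},
        optimize='greedy',
    )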

cotengra.interface.array_contract_tree(inputs, output=None, size_dict=None, shapes=None, optimize='auto', canonicalize=True, sort_contraction_indices=False)[source]

Get the ContractionTree for the tensor contraction specified by inputs, output and size_dict, with optimization strategy given by optimize. The tree can be used to inspect and also perform the contraction.

Parameters:
  • inputs (Sequence[Sequence[Hashable]]) – The input terms.

  • output (Sequence[Hashable], optional) – The output term.

  • size_dict (dict[Hashable, int], optional) – The size of each index, if given, shapes is ignored.

  • shapes (Sequence[tuple[int]], optional) – The shape of each input array. Needed if size_dict not supplied.

  • optimize (str, path_like, PathOptimizer, or ContractionTree) –

    The optimization strategy to use. This can be:

    • A string preset, e.g. 'auto', 'greedy', 'optimal'.

    • A PathOptimizer instance.

    • An explicit path, e.g. [(0, 1), (2, 3), ...].

    • An explicit ContractionTree instance.

  • canonicalize (bool, optional) – If True, canonicalize the inputs and output so that the indices are relabelled 'a', 'b', 'c', ..., etc. in the order they appear.

  • sort_contraction_indices (bool, optional) – If True, call tree.sort_contraction_indices().

Return type:

ContractionTree
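
A sketch of building and then inspecting a tree; the contraction_cost and contraction_width methods are assumed from the ContractionTree API:

    import cotengra as ctg

    tree = ctg.array_contract_tree(
        [('a', 'b'), ('b', 'c'), ('c', 'd')],
        shapes=[(2, 3), (3, 4), (4, 5)],
        optimize='greedy',
    )
    print(tree.contraction_cost())   # total scalar operations
    print(tree.contraction_width())  # log2 of the largest intermediate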

class cotengra.interface.Variadic(fn, **kwargs)[source]

Wrapper to make a non-variadic function (i.e. with signature f(arrays)) variadic (i.e. with signature f(*arrays)).

__slots__ = ('fn', 'kwargs')
__call__(*arrays, **kwargs)[source]
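
A minimal sketch of the wrapping behavior (the summing function is illustrative):

    import numpy as np
    from cotengra.interface import Variadic

    def total(arrays):         # non-variadic: takes a single sequence
        return sum(arrays)

    f = Variadic(total)        # now callable as f(x, y, ...)
    f(np.ones(2), np.ones(2))  # -> array([2., 2.])
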
class cotengra.interface.Via(fn, convert_in, convert_out)[source]

Wrapper that applies one function to the input arrays and another to the output array. For example, moving the tensors from CPU to GPU and back.

__slots__ = ('fn', 'convert_in', 'convert_out')
__call__(*arrays, **kwargs)[source]
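
A minimal sketch, using dtype conversion rather than device movement as the illustrative round trip:

    import numpy as np
    from cotengra.interface import Via

    # upcast each input array, downcast the output array
    to_f64 = lambda x: np.asarray(x, dtype='float64')
    to_f32 = lambda x: np.asarray(x, dtype='float32')

    matmul_f64 = Via(np.matmul, convert_in=to_f64, convert_out=to_f32)
    out = matmul_f64(
        np.ones((2, 3), dtype='float32'),
        np.ones((3, 4), dtype='float32'),
    )
    print(out.dtype)  # float32, with the multiply performed in float64
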
class cotengra.interface.WithBackend(fn)[source]

Wrapper to make any autoray-written function take a backend kwarg, by simply using autoray.backend_like.

__slots__ = ('fn',)
__call__(*args, backend=None, **kwargs)[source]
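
A minimal sketch, assuming a function written backend-agnostically with autoray.do:

    import numpy as np
    from autoray import do
    from cotengra.interface import WithBackend

    def norm(x):
        # dispatches to whichever backend is active
        return do('sqrt', do('sum', x * x))

    norm_b = WithBackend(norm)
    norm_b(np.arange(4.0), backend='numpy')  # force a specific backend
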
cotengra.interface._array_contract_expression_with_constants(inputs, output, size_dict, constants, optimize='auto', implementation=None, prefer_einsum=False, autojit=False, via=None, sort_contraction_indices=False, cache=True)[source]
cotengra.interface._build_expression(inputs, output=None, size_dict=None, optimize='auto', implementation=None, prefer_einsum=False, autojit=False, via=None, sort_contraction_indices=False)[source]
cotengra.interface._CONTRACT_EXPR_CACHE
cotengra.interface.array_contract_expression(inputs, output=None, size_dict=None, shapes=None, optimize='auto', constants=None, canonicalize=True, cache=True, **kwargs)[source]

Get a callable 'expression' that will contract tensors with indices and shapes described by inputs and size_dict to output. The optimize kwarg can be a path, an optimizer or a contraction tree. In the latter case, sliced indices, for example, will be used if present. The same is true if optimize is an optimizer that can directly produce ContractionTree instances (i.e. has a .search() method).

Parameters:
  • inputs (Sequence[Sequence[Hashable]]) – The input terms.

  • output (Sequence[Hashable]) – The output term.

  • size_dict (dict[Hashable, int]) – The size of each index.

  • optimize (str, path_like, PathOptimizer, or ContractionTree) –

    The optimization strategy to use. This can be:

    • A string preset, e.g. 'auto', 'greedy', 'optimal'.

    • A PathOptimizer instance.

    • An explicit path, e.g. [(0, 1), (2, 3), ...].

    • An explicit ContractionTree instance.

    If the optimizer provides sliced indices they will be used.

  • constants (dict[int, array_like], optional) – A mapping of constant input positions to constant arrays. If given, the final expression will take only the remaining non-constant tensors as inputs. Note this is a different format to the constants kwarg of einsum_expression() since it also provides the constant arrays.

  • implementation (str or tuple[callable, callable], optional) –

    What library to use to actually perform the contractions. Options are:

    • None: let cotengra choose.

    • "autoray": dispatch with autoray, using the tensordot and einsum implementation of the backend.

    • "cotengra": use the tensordot and einsum implementation of cotengra, which is based on batch matrix multiplication. This is faster for some backends like numpy, and also enables libraries which don't yet provide tensordot and einsum to be used.

    • "cuquantum": use the cuquantum library to perform the whole contraction (not just individual contractions).

    • tuple[callable, callable]: manually supply the tensordot and einsum implementations to use.

  • autojit (bool, optional) – If True, use autoray.autojit() to compile the contraction function.

  • via (tuple[callable, callable], optional) – If given, the first function will be applied to the input arrays and the second to the output array. For example, moving the tensors from CPU to GPU and back.

  • sort_contraction_indices (bool, optional) – If True, call tree.sort_contraction_indices() before constructing the contraction function.

  • cache (bool, optional) – If True, cache the contraction expression. This negates the overhead of pathfinding and building the expression when a contraction is performed multiple times. Only for hashable optimize objects.

Returns:

expr – A callable, signature expr(*arrays) that will contract arrays with shapes matching shapes.

Return type:

callable
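
A minimal usage sketch (labels, sizes and the 'greedy' strategy are illustrative):

    import numpy as np
    import cotengra as ctg

    expr = ctg.array_contract_expression(
        inputs=[('a', 'b'), ('b', 'c')],
        output=('a', 'c'),
        size_dict={'a': 2, 'b': 3, 'c': 4},
        optimize='greedy',
    )
    z = expr(np.random.rand(2, 3), np.random.rand(3, 4))
    print(z.shape)  # (2, 4)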

cotengra.interface.array_contract(arrays, inputs, output=None, optimize='auto', cache_expression=True, backend=None, **kwargs)[source]

Perform the tensor contraction specified by inputs, output and size_dict, using strategy given by optimize. By default the path finding and expression building is cached, so that if a matching contraction is performed multiple times the overhead is negated.

Parameters:
  • arrays (Sequence[array_like]) – The arrays to contract.

  • inputs (Sequence[Sequence[Hashable]]) – The input terms.

  • output (Sequence[Hashable]) – The output term.

  • optimize (str, path_like, PathOptimizer, or ContractionTree) –

    The optimization strategy to use. This can be:

    • A string preset, e.g. 'auto', 'greedy', 'optimal'.

    • A PathOptimizer instance.

    • An explicit path, e.g. [(0, 1), (2, 3), ...].

    • An explicit ContractionTree instance.

    If the optimizer provides sliced indices they will be used.

  • cache_expression (bool, optional) – If True, cache the expression used to contract the arrays. This negates the overhead of pathfinding and building the expression when a contraction is performed multiple times. Only for hashable optimize objects.

  • backend (str, optional) – If given, the explicit backend to use for the contraction, by default the backend is dispatched automatically.

  • kwargs – Passed to array_contract_expression().

Return type:

array_like
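
A minimal usage sketch; note the index labels can be arbitrary hashables, not just single characters:

    import numpy as np
    import cotengra as ctg

    x = np.random.rand(2, 3)
    y = np.random.rand(3, 4)

    z = ctg.array_contract(
        [x, y],
        inputs=[('i', 'j'), ('j', 'k')],
        output=('i', 'k'),
    )
    print(z.shape)  # (2, 4)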

cotengra.interface.einsum_tree(*args, optimize='auto', canonicalize=False, sort_contraction_indices=False)[source]

Get the ContractionTree for the einsum equation eq and optimization strategy optimize. The tree can be used to inspect and also perform the contraction.

Parameters:
  • eq (str) – The equation to use for contraction, for example 'ab,bc->ac'.

  • shapes (Sequence[tuple[int]]) – The shape of each input array.

  • optimize (str, path_like, PathOptimizer, or ContractionTree) –

    The optimization strategy to use. This can be:

    • A string preset, e.g. 'auto', 'greedy', 'optimal'.

    • A PathOptimizer instance.

    • An explicit path, e.g. [(0, 1), (2, 3), ...].

    • An explicit ContractionTree instance.

  • canonicalize (bool, optional) – If True, canonicalize the inputs and output so that the indices are relabelled 'a', 'b', 'c', ..., etc. in the order they appear.

  • sort_contraction_indices (bool, optional) – If True, call tree.sort_contraction_indices().

Return type:

ContractionTree
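
A minimal usage sketch, assuming the equation and shapes are passed positionally:

    import cotengra as ctg

    tree = ctg.einsum_tree(
        'ab,bc,cd->ad', (2, 3), (3, 4), (4, 5), optimize='greedy'
    )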

cotengra.interface.einsum_expression(*args, optimize='auto', constants=None, cache=True, **kwargs)[source]

Get a callable 'expression' that will contract tensors with shapes shapes according to equation eq. The optimize kwarg can be a path, an optimizer or a contraction tree. In the latter case, sliced indices, for example, will be used if present. The same is true if optimize is an optimizer that can directly produce ContractionTree instances (i.e. has a .search() method).

Parameters:
  • eq (str) – The equation to use for contraction, for example 'ab,bc->ac'. The output will be automatically computed if not supplied, but ellipses ('...') are not supported.

  • shapes (Sequence[tuple[int]]) – The shapes of the tensors to contract, or the constant tensor itself if marked as constant in constants.

  • optimize (str, path_like, PathOptimizer, or ContractionTree) –

    The optimization strategy to use. This can be:

    • A string preset, e.g. 'auto', 'greedy', 'optimal'.

    • A PathOptimizer instance.

    • An explicit path, e.g. [(0, 1), (2, 3), ...].

    • An explicit ContractionTree instance.

    If the optimizer provides sliced indices they will be used.

  • constants (Sequence of int, optional) – The indices of tensors to treat as constant, the final expression will take the remaining non-constant tensors as inputs. Note this is a different format to the constants kwarg of array_contract_expression() since the actual constant arrays are inserted into shapes.

  • implementation (str or tuple[callable, callable], optional) –

    What library to use to actually perform the contractions. Options are:

    • None: let cotengra choose.

    • "autoray": dispatch with autoray, using the tensordot and einsum implementation of the backend.

    • "cotengra": use the tensordot and einsum implementation of cotengra, which is based on batch matrix multiplication. This is faster for some backends like numpy, and also enables libraries which don't yet provide tensordot and einsum to be used.

    • "cuquantum": use the cuquantum library to perform the whole contraction (not just individual contractions).

    • tuple[callable, callable]: manually supply the tensordot and einsum implementations to use.

  • autojit (bool, optional) – If True, use autoray.autojit() to compile the contraction function.

  • via (tuple[callable, callable], optional) – If given, the first function will be applied to the input arrays and the second to the output array. For example, moving the tensors from CPU to GPU and back.

  • sort_contraction_indices (bool, optional) – If True, call tree.sort_contraction_indices() before constructing the contraction function.

  • cache (bool, optional) – If True, cache the contraction expression. This negates the overhead of pathfinding and building the expression when a contraction is performed multiple times. Only for hashable optimize objects.

Returns:

expr – A callable, signature expr(*arrays) that will contract arrays with shapes matching shapes.

Return type:

callable
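
A minimal usage sketch, including the constants format where the constant array itself is inserted in place of its shape:

    import numpy as np
    import cotengra as ctg

    # plain expression: takes both tensors at call time
    expr = ctg.einsum_expression('ab,bc->ac', (2, 3), (3, 4))
    z = expr(np.random.rand(2, 3), np.random.rand(3, 4))

    # tensor 0 constant: pass the array in place of its shape;
    # the expression then takes only the remaining tensor
    const = np.random.rand(2, 3)
    expr_c = ctg.einsum_expression('ab,bc->ac', const, (3, 4), constants=[0])
    z = expr_c(np.random.rand(3, 4))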

cotengra.interface.einsum(*args, optimize='auto', cache_expression=True, backend=None, **kwargs)[source]

Perform an einsum contraction using cotengra, with strategy given by optimize. By default the path finding and expression building is cached, so that if a matching contraction is performed multiple times the overhead is negated.

Parameters:
  • eq (str) – The equation to use for contraction, for example 'ab,bc->ac'.

  • arrays (Sequence[array_like]) – The arrays to contract.

  • optimize (str, path_like, PathOptimizer, or ContractionTree) –

    The optimization strategy to use. This can be:

    • A string preset, e.g. 'auto', 'greedy', 'optimal'.

    • A PathOptimizer instance.

    • An explicit path, e.g. [(0, 1), (2, 3), ...].

    • An explicit ContractionTree instance.

    If the optimizer provides sliced indices they will be used.

  • cache_expression (bool, optional) – If True, cache the expression used to contract the arrays. This negates the overhead of pathfinding and building the expression when a contraction is performed multiple times. Only for hashable optimize objects.

  • backend (str, optional) – If given, the explicit backend to use for the contraction, by default the backend is dispatched automatically.

  • kwargs – Passed to array_contract_expression().

Return type:

array_like
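
A minimal usage sketch (the 'greedy' strategy is illustrative):

    import numpy as np
    import cotengra as ctg

    x = np.random.rand(2, 3)
    y = np.random.rand(3, 4)

    # drop-in einsum with cotengra path finding; repeated calls with
    # the same equation and shapes reuse the cached expression
    z = ctg.einsum('ab,bc->ac', x, y, optimize='greedy')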