cotengra.pathfinders.path_basic

Basic optimization routines.

Module Contents

Classes

ContractionProcessor

A helper class for combining bottom-up simplifications, greedy, and optimal contraction path optimization.

EnsureInputsOutputAreSequence

GreedyOptimizer

Class interface to the greedy optimizer which can be instantiated with default options.

RandomGreedyOptimizer

Lightweight random greedy optimizer that eschews hyperparameter tuning and contraction tree construction.

OptimalOptimizer

Class interface to the optimal optimizer which can be instantiated with default options.

Functions

is_simplifiable(legs, appearances)

Check if legs contains any diag (repeated) or reduced (appears nowhere else) indices.

compute_simplified(legs, appearances)

Compute the diag and reduced legs of a term. This function assumes that the legs are already sorted.

compute_contracted(ilegs, jlegs, appearances)

Compute the contracted legs of two terms.

compute_size(legs, sizes)

Compute the size of a term.

compute_flops(ilegs, jlegs, sizes)

Compute the flops cost of contracting two terms.

compute_con_cost_flops(temp_legs, appearances, sizes, ...)

Compute the total flops cost of a contraction given by temporary legs, also removing any contracted indices from the temporary legs.

compute_con_cost_size(temp_legs, appearances, sizes, ...)

Compute the max size of a contraction given by temporary legs, also removing any contracted indices from the temporary legs.

compute_con_cost_write(temp_legs, appearances, sizes, ...)

Compute the total write cost of a contraction given by temporary legs, also removing any contracted indices from the temporary legs.

compute_con_cost_combo(temp_legs, appearances, sizes, ...)

Compute the combined total flops and write cost of a contraction given by temporary legs, as cost = flops + factor * size.

compute_con_cost_limit(temp_legs, appearances, sizes, ...)

Compute the combined total flops and write cost of a contraction given by temporary legs, as cost = max(flops, factor * size).

parse_minimize_for_optimal(minimize)

Given a string, parse it into a function that computes the cost of a contraction.

linear_to_ssa(path[, N])

Convert a path with recycled linear ids to a path with static single assignment ids.

ssa_to_linear(ssa_path[, N])

Convert a path with static single assignment ids to a path with recycled linear ids.

is_ssa_path(path, nterms)

Check if an explicitly given path is in 'static single assignment' form.

optimize_simplify(inputs, output, size_dict[, use_ssa])

Find the (likely only partial) contraction path corresponding to simplifications only.

optimize_greedy(inputs, output, size_dict[, costmod, ...])

Find a contraction path using a greedy algorithm.

optimize_random_greedy_track_flops(inputs, output, ...)

Perform a batch of random greedy optimizations, simultaneously tracking the best contraction path in terms of flops.

optimize_optimal(inputs, output, size_dict[, ...])

Find the optimal contraction path using a dynamic programming

get_optimize_greedy([accel])

get_optimize_random_greedy_track_flops([accel])

get_optimize_optimal([accel])

cotengra.pathfinders.path_basic.is_simplifiable(legs, appearances)[source]

Check if legs contains any diag (repeated) or reduced (appears nowhere else) indices.

cotengra.pathfinders.path_basic.compute_simplified(legs, appearances)[source]

Compute the diag and reduced legs of a term. This function assumes that the legs are already sorted. It handles the case where an index is both diag and reduced (i.e. traced).

cotengra.pathfinders.path_basic.compute_contracted(ilegs, jlegs, appearances)[source]

Compute the contracted legs of two terms.

cotengra.pathfinders.path_basic.compute_size(legs, sizes)[source]

Compute the size of a term.

cotengra.pathfinders.path_basic.compute_flops(ilegs, jlegs, sizes)[source]

Compute the flops cost of contracting two terms.

cotengra.pathfinders.path_basic.compute_con_cost_flops(temp_legs, appearances, sizes, iscore, jscore)[source]

Compute the total flops cost of a contraction given by temporary legs, also removing any contracted indices from the temporary legs.

cotengra.pathfinders.path_basic.compute_con_cost_size(temp_legs, appearances, sizes, iscore, jscore)[source]

Compute the max size of a contraction given by temporary legs, also removing any contracted indices from the temporary legs.

cotengra.pathfinders.path_basic.compute_con_cost_write(temp_legs, appearances, sizes, iscore, jscore)[source]

Compute the total write cost of a contraction given by temporary legs, also removing any contracted indices from the temporary legs.

cotengra.pathfinders.path_basic.compute_con_cost_combo(temp_legs, appearances, sizes, iscore, jscore, factor)[source]

Compute the combined total flops and write cost of a contraction given by temporary legs, also removing any contracted indices from the temporary legs. The combined cost is given by:

cost = flops + factor * size

cotengra.pathfinders.path_basic.compute_con_cost_limit(temp_legs, appearances, sizes, iscore, jscore, factor)[source]

Compute the combined total flops and write cost of a contraction given by temporary legs, also removing any contracted indices from the temporary legs. The combined cost is given by:

cost = max(flops, factor * size)

I.e. assuming one or the other to be the limiting factor.
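For illustration, a minimal sketch of the two combination rules with made-up numbers (just the arithmetic; the real functions work on temp_legs and also fold in the running scores iscore and jscore):

flops = 1_000_000   # hypothetical operation count of a candidate contraction
size = 10_000       # hypothetical size of the tensor it writes
factor = 64         # relative weighting of write cost vs flops

combo = flops + factor * size      # "combo" rule: the costs add
limit = max(flops, factor * size)  # "limit" rule: the larger term dominates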

cotengra.pathfinders.path_basic.parse_minimize_for_optimal(minimize)[source]

Given a string, parse it into a function that computes the cost of a contraction. The string can be one of the following:

  • “flops”: compute_con_cost_flops

  • “size”: compute_con_cost_size

  • “write”: compute_con_cost_write

  • “combo”: compute_con_cost_combo

  • “combo-{factor}”: compute_con_cost_combo with specified factor

  • “limit”: compute_con_cost_limit

  • “limit-{factor}”: compute_con_cost_limit with specified factor

This function is cached for speed.
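A hedged usage sketch (the returned callables are the compute_con_cost_* functions listed above, with the factor bound where one is given):

from cotengra.pathfinders.path_basic import parse_minimize_for_optimal

cost_fn = parse_minimize_for_optimal("flops")      # -> compute_con_cost_flops
combo_fn = parse_minimize_for_optimal("combo-64")  # combo cost with factor=64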

class cotengra.pathfinders.path_basic.ContractionProcessor(inputs, output, size_dict, track_flops=False)[source]

A helper class for combining bottom-up simplifications, greedy, and optimal contraction path optimization.

__slots__ = ('nodes', 'edges', 'indmap', 'appearances', 'sizes', 'ssa', 'ssa_path', 'track_flops', 'flops')
copy()[source]
neighbors(i)[source]

Get all neighbors of node i.

print_current_terms()[source]
remove_ix(ix)[source]

Drop the index ix, simply removing it from all nodes and the edgemap.

pop_node(i)[source]

Remove node i from the graph, updating the edgemap and returning the legs of the node.

add_node(legs)[source]

Add a new node to the graph, updating the edgemap and returning the node index of the new node.

check()[source]

Check that the current graph is valid, useful for debugging.

contract_nodes(i, j, new_legs=None)[source]

Contract the nodes i and j, adding a new node to the graph and returning its index.

simplify_batch()[source]

Find any indices that appear in all terms and remove them, since they simply add a constant factor to the cost of the contraction, but would create a fully connected graph if left in place.

simplify_single_terms()[source]

Take any diags, reductions and traces of single terms.

simplify_scalars()[source]

Remove all scalars, contracting them into the smallest remaining node, if there is one.

simplify_hadamard()[source]
simplify()[source]
subgraphs()[source]
optimize_greedy(costmod=1.0, temperature=0.0, seed=None)[source]
optimize_optimal_connected(where, minimize='flops', cost_cap=2, search_outer=False)[source]
optimize_optimal(minimize='flops', cost_cap=2, search_outer=False)[source]
optimize_remaining_by_size()[source]

This function simply contracts remaining terms in order of size, and is meant to handle the disconnected terms left after greedy or optimal optimization.
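A hedged sketch of the intended workflow, assuming einsum-style inputs (the indices and sizes here are made up; the ssa_path attribute accumulates the path as the simplification and optimization methods run):

from cotengra.pathfinders.path_basic import ContractionProcessor

inputs = [("a", "b"), ("b", "c"), ("c", "d")]
output = ("a", "d")
size_dict = {"a": 2, "b": 3, "c": 4, "d": 2}

cp = ContractionProcessor(inputs, output, size_dict)
cp.simplify()                    # batch / single-term / scalar / hadamard passes
cp.optimize_greedy()             # greedily contract the remaining graph
cp.optimize_remaining_by_size()  # mop up any disconnected subgraphs
ssa_path = cp.ssa_path           # the accumulated SSA-format path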

cotengra.pathfinders.path_basic.linear_to_ssa(path, N=None)[source]

Convert a path with recycled linear ids to a path with static single assignment ids. For example:

>>> linear_to_ssa([(0, 3), (1, 2), (0, 1)])
[(0, 3), (2, 4), (1, 5)]
cotengra.pathfinders.path_basic.ssa_to_linear(ssa_path, N=None)[source]

Convert a path with static single assignment ids to a path with recycled linear ids. For example:

>>> ssa_to_linear([(0, 3), (2, 4), (1, 5)])
[(0, 3), (1, 2), (0, 1)]
cotengra.pathfinders.path_basic.is_ssa_path(path, nterms)[source]

Check if an explicitly given path is in ‘static single assignment’ form.
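For instance, reusing the two paths from the conversion examples above (assuming a boolean return):

>>> is_ssa_path([(0, 3), (2, 4), (1, 5)], 4)
True
>>> is_ssa_path([(0, 3), (1, 2), (0, 1)], 4)
False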

cotengra.pathfinders.path_basic.optimize_simplify(inputs, output, size_dict, use_ssa=False)[source]

Find the (likely only partial) contraction path corresponding to simplifications only. These simplifications are:

  • ignore any indices that appear in all terms

  • combine any repeated indices within a single term

  • reduce any non-output indices that only appear on a single term

  • combine any scalar terms

  • combine any tensors with matching indices (hadamard products)

Parameters:
  • inputs (tuple[tuple[str]]) – The indices of each input tensor.

  • output (tuple[str]) – The indices of the output tensor.

  • size_dict (dict[str, int]) – A dictionary mapping indices to their dimension.

  • use_ssa (bool, optional) – Whether to return the contraction path in ‘SSA’ format (i.e. as if each intermediate is appended to the list of inputs, without removals).

Returns:

path – The contraction path, given as a sequence of pairs of node indices.

Return type:

list[list[int]]
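A hedged usage sketch with made-up indices, where one term carries a trace (“a” repeated) and the index “d” appears on only one term and is reduced:

from cotengra.pathfinders.path_basic import optimize_simplify

inputs = [("a", "a", "b"), ("b", "c", "d")]
output = ("c",)
size_dict = {"a": 2, "b": 3, "c": 4, "d": 5}

# only the simplification steps appear in the (possibly partial) path
path = optimize_simplify(inputs, output, size_dict)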

cotengra.pathfinders.path_basic.optimize_greedy(inputs, output, size_dict, costmod=1.0, temperature=0.0, simplify=True, use_ssa=False)[source]

Find a contraction path using a greedy algorithm.

Parameters:
  • inputs (tuple[tuple[str]]) – The indices of each input tensor.

  • output (tuple[str]) – The indices of the output tensor.

  • size_dict (dict[str, int]) – A dictionary mapping indices to their dimension.

  • costmod (float, optional) –

When assessing local greedy scores, how much to weight the size of the tensors removed compared to the size of the tensor added:

    score = size_ab - costmod * (size_a + size_b)
    

    This can be a useful hyper-parameter to tune.

  • temperature (float, optional) –

When assessing local greedy scores, how much to randomly perturb the score. This is implemented as:

    score -> sign(score) * log(|score|) - temperature * gumbel()
    

which implements Boltzmann sampling.

  • simplify (bool, optional) –

    Whether to perform simplifications before optimizing. These are:

    • ignore any indices that appear in all terms

    • combine any repeated indices within a single term

    • reduce any non-output indices that only appear on a single term

    • combine any scalar terms

    • combine any tensors with matching indices (hadamard products)

Such simplifications may be required in the general case for the proper functioning of the core optimization, but may be skipped if the input indices are already in a simplified form.

  • use_ssa (bool, optional) – Whether to return the contraction path in ‘static single assignment’ (SSA) format (i.e. as if each intermediate is appended to the list of inputs, without removals). This can be quicker and easier to work with than the ‘linear recycled’ format that numpy and opt_einsum use.

Returns:

path – The contraction path, given as a sequence of pairs of node indices.

Return type:

list[list[int]]
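A hedged usage sketch (made-up indices; the exact path returned depends on the scoring and is not guaranteed):

from cotengra.pathfinders.path_basic import optimize_greedy

inputs = [("a", "b"), ("b", "c"), ("c", "d")]
output = ("a", "d")
size_dict = {"a": 2, "b": 3, "c": 4, "d": 2}

# temperature=0.0 gives the deterministic greedy path
path = optimize_greedy(inputs, output, size_dict, costmod=1.0, temperature=0.0)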

cotengra.pathfinders.path_basic.optimize_random_greedy_track_flops(inputs, output, size_dict, ntrials=1, costmod=1.0, temperature=0.01, seed=None, simplify=True, use_ssa=False)[source]

Perform a batch of random greedy optimizations, simultaneously tracking the best contraction path in terms of flops, so as to avoid constructing a separate contraction tree.

Parameters:
  • inputs (tuple[tuple[str]]) – The indices of each input tensor.

  • output (tuple[str]) – The indices of the output tensor.

  • size_dict (dict[str, int]) – A dictionary mapping indices to their dimension.

  • ntrials (int, optional) – The number of random greedy trials to perform. The default is 1.

  • costmod (float, optional) –

When assessing local greedy scores, how much to weight the size of the tensors removed compared to the size of the tensor added:

    score = size_ab - costmod * (size_a + size_b)
    

    This can be a useful hyper-parameter to tune.

  • temperature (float, optional) –

When assessing local greedy scores, how much to randomly perturb the score. This is implemented as:

    score -> sign(score) * log(|score|) - temperature * gumbel()
    

which implements Boltzmann sampling.

  • seed (int, optional) – The seed for the random number generator.

  • simplify (bool, optional) –

    Whether to perform simplifications before optimizing. These are:

    • ignore any indices that appear in all terms

    • combine any repeated indices within a single term

    • reduce any non-output indices that only appear on a single term

    • combine any scalar terms

    • combine any tensors with matching indices (hadamard products)

Such simplifications may be required in the general case for the proper functioning of the core optimization, but may be skipped if the input indices are already in a simplified form.

  • use_ssa (bool, optional) – Whether to return the contraction path in ‘static single assignment’ (SSA) format (i.e. as if each intermediate is appended to the list of inputs, without removals). This can be quicker and easier to work with than the ‘linear recycled’ format that numpy and opt_einsum use.

Returns:

  • path (list[list[int]]) – The best contraction path, given as a sequence of pairs of node indices.

  • flops (float) – The flops (/ contraction cost / number of multiplications) of the best contraction path, given as log10.
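A hedged usage sketch (made-up indices; note the function returns both the path and its log10 flops, unlike optimize_greedy):

from cotengra.pathfinders.path_basic import optimize_random_greedy_track_flops

inputs = [("a", "b"), ("b", "c"), ("c", "d")]
output = ("a", "d")
size_dict = {"a": 2, "b": 3, "c": 4, "d": 2}

path, flops = optimize_random_greedy_track_flops(
    inputs, output, size_dict, ntrials=16, seed=42
)
# flops is the log10 multiplication count of the best of the 16 trials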

cotengra.pathfinders.path_basic.optimize_optimal(inputs, output, size_dict, minimize='flops', cost_cap=2, search_outer=False, simplify=True, use_ssa=False)[source]

Find the optimal contraction path using a dynamic programming algorithm (by default excluding outer products).

The algorithm is an optimized version of Phys. Rev. E 90, 033315 (2014) (preprint: https://arxiv.org/abs/1304.6112), adapted from the opt_einsum implementation.

Parameters:
  • inputs (tuple[tuple[str]]) – The indices of each input tensor.

  • output (tuple[str]) – The indices of the output tensor.

  • size_dict (dict[str, int]) – A dictionary mapping indices to their dimension.

  • minimize (str, optional) –

    How to compute the cost of a contraction. The default is “flops”. Can be one of:

    • ”flops”: minimize with respect to total operation count only (also known as contraction cost)

    • ”size”: minimize with respect to maximum intermediate size only (also known as contraction width)

    • ”write”: minimize with respect to total write cost only

    • ”combo” or “combo-{factor}”: minimize with respect to the sum of flops and write, weighted by the specified factor. If the factor is not given a default value is used.

    • ”limit” or “limit-{factor}”: minimize with respect to the max (at each contraction) of flops or write, weighted by the specified factor. If the factor is not given a default value is used.

    ’combo’ is generally a good default in terms of practical hardware performance, where both memory bandwidth and compute are limited.

  • cost_cap (float, optional) – The maximum cost of a contraction to initially consider. This acts like a sieve and is doubled at each iteration until the optimal path can be found, but supplying an accurate guess can speed up the algorithm.

  • search_outer (bool, optional) – Whether to allow outer products in the contraction path. The default is False. Especially when considering write costs, the fastest path is very unlikely to include outer products.

  • simplify (bool, optional) –

    Whether to perform simplifications before optimizing. These are:

    • ignore any indices that appear in all terms

    • combine any repeated indices within a single term

    • reduce any non-output indices that only appear on a single term

    • combine any scalar terms

    • combine any tensors with matching indices (hadamard products)

Such simplifications may be required in the general case for the proper functioning of the core optimization, but may be skipped if the input indices are already in a simplified form.

  • use_ssa (bool, optional) – Whether to return the contraction path in ‘static single assignment’ (SSA) format (i.e. as if each intermediate is appended to the list of inputs, without removals). This can be quicker and easier to work with than the ‘linear recycled’ format that numpy and opt_einsum use.

Returns:

path – The contraction path, given as a sequence of pairs of node indices.

Return type:

list[list[int]]
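A hedged usage sketch (made-up indices; the dynamic programming search is exponential in the worst case, so this is practical only for modest numbers of tensors):

from cotengra.pathfinders.path_basic import optimize_optimal

inputs = [("a", "b"), ("b", "c"), ("c", "d")]
output = ("a", "d")
size_dict = {"a": 2, "b": 3, "c": 4, "d": 2}

path = optimize_optimal(inputs, output, size_dict, minimize="combo")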

class cotengra.pathfinders.path_basic.EnsureInputsOutputAreSequence(f)[source]
__call__(inputs, output, *args, **kwargs)[source]
cotengra.pathfinders.path_basic.get_optimize_greedy(accel='auto')[source]
cotengra.pathfinders.path_basic.get_optimize_random_greedy_track_flops(accel='auto')[source]
class cotengra.pathfinders.path_basic.GreedyOptimizer(costmod=1.0, temperature=0.0, simplify=True, accel='auto')[source]

Bases: cotengra.oe.PathOptimizer

Class interface to the greedy optimizer which can be instantiated with default options.

__slots__ = ('costmod', 'temperature', 'simplify', '_optimize_fn')
maybe_update_defaults(**kwargs)[source]
ssa_path(inputs, output, size_dict, **kwargs)[source]
search(inputs, output, size_dict, **kwargs)[source]
__call__(inputs, output, size_dict, **kwargs)[source]
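A hedged usage sketch. Since this is a cotengra.oe.PathOptimizer subclass, instances can also be passed as the optimize argument to opt_einsum-style contract functions:

from cotengra.pathfinders.path_basic import GreedyOptimizer

inputs = [("a", "b"), ("b", "c"), ("c", "d")]
output = ("a", "d")
size_dict = {"a": 2, "b": 3, "c": 4, "d": 2}

opt = GreedyOptimizer(costmod=0.9, temperature=0.1)
path = opt(inputs, output, size_dict)            # linear (opt_einsum) format
spath = opt.ssa_path(inputs, output, size_dict)  # same search, SSA format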
class cotengra.pathfinders.path_basic.RandomGreedyOptimizer(max_repeats=32, costmod=1.0, temperature=0.01, seed=None, simplify=True, accel='auto', parallel='auto')[source]

Bases: cotengra.oe.PathOptimizer

Lightweight random greedy optimizer that eschews hyperparameter tuning and contraction tree construction. This is a stateful optimizer that should not be re-used on different contractions.

Parameters:
  • max_repeats (int, optional) – The number of random greedy trials to perform.

  • costmod (float, optional) –

When assessing local greedy scores, how much to weight the size of the tensors removed compared to the size of the tensor added:

    score = size_ab - costmod * (size_a + size_b)
    

    This can be a useful hyper-parameter to tune.

  • temperature (float, optional) –

When assessing local greedy scores, how much to randomly perturb the score. This is implemented as:

    score -> sign(score) * log(|score|) - temperature * gumbel()
    

which implements Boltzmann sampling.

  • seed (int, optional) – The seed for the random number generator. Note that deterministic behavior is only guaranteed for a fixed choice of backend (python or rust, i.e. the accel parameter) and parallel settings.

  • simplify (bool, optional) –

    Whether to perform simplifications before optimizing. These are:

    • ignore any indices that appear in all terms

    • combine any repeated indices within a single term

    • reduce any non-output indices that only appear on a single term

    • combine any scalar terms

    • combine any tensors with matching indices (hadamard products)

Such simplifications may be required in the general case for the proper functioning of the core optimization, but may be skipped if the input indices are already in a simplified form.

  • accel (bool or str, optional) – Whether to use the accelerated cotengrust backend. If “auto” the backend is used if available.

  • parallel (bool or str, optional) – Whether to use parallel processing. If “auto” the default is to use threads if the accelerated backend is not used, and processes if it is.

best_ssa_path

The best contraction path found so far.

Type:

list[list[int]]

best_flops

The flops (/ contraction cost / number of multiplications) of the best contraction path found so far.

Type:

float

maybe_update_defaults(**kwargs)[source]
ssa_path(inputs, output, size_dict, **kwargs)[source]
search(inputs, output, size_dict, **kwargs)[source]
__call__(inputs, output, size_dict, **kwargs)[source]
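A hedged usage sketch (made-up indices; remember the optimizer is stateful and keeps its best result across calls, so use a fresh instance per contraction):

from cotengra.pathfinders.path_basic import RandomGreedyOptimizer

inputs = [("a", "b"), ("b", "c"), ("c", "d")]
output = ("a", "d")
size_dict = {"a": 2, "b": 3, "c": 4, "d": 2}

opt = RandomGreedyOptimizer(max_repeats=64, temperature=0.02, seed=42)
path = opt(inputs, output, size_dict)
best = opt.best_flops      # log10 flops of the best trial so far
spath = opt.best_ssa_path  # the corresponding SSA-format path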
cotengra.pathfinders.path_basic.get_optimize_optimal(accel='auto')[source]
class cotengra.pathfinders.path_basic.OptimalOptimizer(minimize='flops', cost_cap=2, search_outer=False, simplify=True, accel='auto')[source]

Bases: cotengra.oe.PathOptimizer

Class interface to the optimal optimizer which can be instantiated with default options.

__slots__ = ('minimize', 'cost_cap', 'search_outer', 'simplify', '_optimize_fn')
maybe_update_defaults(**kwargs)[source]
ssa_path(inputs, output, size_dict, **kwargs)[source]
search(inputs, output, size_dict, **kwargs)[source]
__call__(inputs, output, size_dict, **kwargs)[source]
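A hedged usage sketch mirroring the greedy class interface above:

from cotengra.pathfinders.path_basic import OptimalOptimizer

inputs = [("a", "b"), ("b", "c"), ("c", "d")]
output = ("a", "d")
size_dict = {"a": 2, "b": 3, "c": 4, "d": 2}

opt = OptimalOptimizer(minimize="combo", cost_cap=2)
path = opt(inputs, output, size_dict)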