cotengra.scoring

Objects for defining and customizing the target cost of a contraction.

Module Contents

Classes

Objective

Base mixin class for all objectives.

ExactObjective

Mixin class for all exact objectives.

FlopsObjective

Objective that scores based on estimated floating point operations.

WriteObjective

Objective that scores based on estimated total write, i.e. the sum of sizes of all intermediates.

SizeObjective

Objective that scores based on maximum intermediate size.

ComboObjective

Objective that scores based on a combination of estimated floating point operations and total write.

LimitObjective

Objective that scores based on a maximum of either estimated floating point operations or the total write, weighted by some factor.

CompressedStatsTracker

CompressedStatsTrackerSize

CompressedStatsTrackerPeak

CompressedStatsTrackerWrite

CompressedStatsTrackerFlops

CompressedStatsTrackerCombo

CompressedObjective

Mixin for objectives that score based on a compressed contraction.

CompressedSizeObjective

Objective that scores based on the maximum size intermediate tensor during a compressed contraction.

CompressedPeakObjective

Objective that scores based on the peak total concurrent size of intermediate tensors during a compressed contraction.

CompressedWriteObjective

Objective that scores based on the total cumulative size of intermediate tensors during a compressed contraction.

CompressedFlopsObjective

Objective that scores based on the total contraction flops during a compressed contraction.

CompressedComboObjective

Mixin for objectives that score based on a compressed contraction.

MultiObjective

Base mixin class for all objectives.

MultiObjectiveDense

Number of intermediate configurations is expected to scale as if all configurations are present.

MultiObjectiveUniform

Number of intermediate configurations is expected to scale as if all configurations are randomly drawn from a uniform distribution.

MultiObjectiveLinear

Number of intermediate configurations is expected to scale linearly with respect to the number of variable indices.

Functions

ensure_basic_quantities_are_computed(trial)

parse_minimize(minimize)

_get_score_fn_str_cached(minimize)

get_score_fn(minimize)

expected_coupons(num_sub, num_total)

If we draw a random 'coupon' which can take num_sub different values num_total times, how many unique coupons do we expect?

Attributes

cotengra.scoring.DEFAULT_COMBO_FACTOR = 64
class cotengra.scoring.Objective[source]

Base mixin class for all objectives.

__slots__ = ()
abstract __call__(trial)[source]

The core method that takes a trial generated by a contraction path driver and scores it to report to a hyper-optimizer. It might also update the parameters in the trial to reflect the desired cost.

__repr__()[source]

Return repr(self).

__hash__()[source]

Return hash(self).

cotengra.scoring.ensure_basic_quantities_are_computed(trial)[source]
class cotengra.scoring.ExactObjective[source]

Bases: Objective

Mixin class for all exact objectives.

abstract cost_local_tree_node(tree, node)[source]

The cost of a single node in tree, according to this objective. Used for subtree reconfiguration.

abstract score_local(**kwargs)[source]

The score to give a single contraction, according to the given kwargs. Used in simulated_anneal.

abstract score_slice_index(costs, ix)[source]

The score to give possibly slicing ix, according to the given costs. Used in the SliceFinder optimization.

abstract get_dynamic_programming_minimize()[source]

Get the argument for optimal optimization, used in subtree reconfiguration.

class cotengra.scoring.FlopsObjective(secondary_weight=0.001)[source]

Bases: ExactObjective

Objective that scores based on estimated floating point operations.

Parameters:

secondary_weight (float, optional) – Weighting factor for secondary objectives (max size and total write). Default is 1e-3.
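As a rough illustration (a sketch, not cotengra's internal accounting), the flops, write, and size of a single matrix-multiplication-like contraction step can be estimated as follows; `matmul_cost` is a hypothetical helper, not part of the library:

```python
def matmul_cost(m, k, n):
    # contracting A[m, k] @ B[k, n] -> C[m, n]
    flops = m * k * n  # scalar multiplications (conventions may add a factor of 2)
    write = m * n      # size of the new intermediate written out
    size = m * n       # max intermediate size contributed by this step
    return flops, write, size
```

FlopsObjective primarily minimizes the first quantity summed over all steps, with the other two entering only through `secondary_weight`.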

__slots__ = ('secondary_weight',)
cost_local_tree_node(tree, node)[source]

The cost of a single node in tree, according to this objective. Used for subtree reconfiguration.

score_local(**kwargs)[source]

The score to give a single contraction, according to the given kwargs. Used in simulated_anneal.

score_slice_index(costs, ix)[source]

The score to give possibly slicing ix, according to the given costs. Used in the SliceFinder optimization.

get_dynamic_programming_minimize()[source]

Get the argument for optimal optimization, used in subtree reconfiguration.

__call__(trial)[source]

The core method that takes a trial generated by a contraction path driver and scores it to report to a hyper-optimizer. It might also update the parameters in the trial to reflect the desired cost.

class cotengra.scoring.WriteObjective(secondary_weight=0.001)[source]

Bases: ExactObjective

Objective that scores based on estimated total write, i.e. the sum of sizes of all intermediates. This is relevant for completely memory-bound contractions, and also for back-propagation.

Parameters:

secondary_weight (float, optional) – Weighting factor for secondary objectives (max size and total flops). Default is 1e-3.

__slots__ = ('secondary_weight',)
cost_local_tree_node(tree, node)[source]

The cost of a single node in tree, according to this objective. Used for subtree reconfiguration.

score_local(**kwargs)[source]

The score to give a single contraction, according to the given kwargs. Used in simulated_anneal.

score_slice_index(costs, ix)[source]

The score to give possibly slicing ix, according to the given costs. Used in the SliceFinder optimization.

get_dynamic_programming_minimize()[source]

Get the argument for optimal optimization, used in subtree reconfiguration.

__call__(trial)[source]

The core method that takes a trial generated by a contraction path driver and scores it to report to a hyper-optimizer. It might also update the parameters in the trial to reflect the desired cost.

class cotengra.scoring.SizeObjective(secondary_weight=0.001)[source]

Bases: ExactObjective

Objective that scores based on maximum intermediate size.

Parameters:

secondary_weight (float, optional) – Weighting factor for secondary objectives (total flops and total write). Default is 1e-3.

__slots__ = ('secondary_weight',)
cost_local_tree_node(tree, node)[source]

The cost of a single node in tree, according to this objective. Used for subtree reconfiguration.

score_local(**kwargs)[source]

The score to give a single contraction, according to the given kwargs. Used in simulated_anneal.

score_slice_index(costs, ix)[source]

The score to give possibly slicing ix, according to the given costs. Used in the SliceFinder optimization.

get_dynamic_programming_minimize()[source]

Get the argument for optimal optimization, used in subtree reconfiguration.

__call__(trial)[source]

The core method that takes a trial generated by a contraction path driver and scores it to report to a hyper-optimizer. It might also update the parameters in the trial to reflect the desired cost.

class cotengra.scoring.ComboObjective(factor=DEFAULT_COMBO_FACTOR)[source]

Bases: ExactObjective

Objective that scores based on a combination of estimated floating point operations and total write, according to:

\[\log_2(\text{flops} + \alpha \times \text{write})\]

Here alpha is the factor parameter of this objective, describing approximately how much slower writing to memory is than performing floating point operations.

Parameters:

factor (float, optional) – Weighting factor for total write. Default is 64.
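The formula above can be sketched directly; `combo_score` is a hypothetical helper mirroring it, not the library's implementation:

```python
import math


def combo_score(flops, write, factor=64):
    # combo objective: log2(flops + factor * write)
    return math.log2(flops + factor * write)
```

With the default factor of 64, a contraction with 192 flops and 1 unit of write scores log2(192 + 64) = 8.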

__slots__ = ('factor',)
cost_local_tree_node(tree, node)[source]

The cost of a single node in tree, according to this objective. Used for subtree reconfiguration.

score_local(**kwargs)[source]

The score to give a single contraction, according to the given kwargs. Used in simulated_anneal.

score_slice_index(costs, ix)[source]

The score to give possibly slicing ix, according to the given costs. Used in the SliceFinder optimization.

get_dynamic_programming_minimize()[source]

Get the argument for optimal optimization, used in subtree reconfiguration.

__call__(trial)[source]

The core method that takes a trial generated by a contraction path driver and scores it to report to a hyper-optimizer. It might also update the parameters in the trial to reflect the desired cost.

class cotengra.scoring.LimitObjective(factor=DEFAULT_COMBO_FACTOR)[source]

Bases: ExactObjective

Objective that scores based on a maximum of either estimated floating point operations or the total write, weighted by some factor:

\[\sum_{i} \max(\text{flops}_i, \alpha \times \text{write}_i)\]

for each contraction $i$. Here alpha is the factor parameter of this objective, describing approximately how much slower writing to memory is than performing floating point operations. This assumes that one or the other is the limiting factor.

Parameters:

factor (float, optional) – Weighting factor for total write. Default is 64.
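The sum above can be sketched as follows; `limit_score` is a hypothetical helper taking per-contraction (flops, write) pairs, not the library's implementation:

```python
def limit_score(contractions, factor=64):
    # sum over contractions of max(flops_i, factor * write_i),
    # assuming either compute or memory write is the bottleneck per step
    return sum(max(flops, factor * write) for flops, write in contractions)
```

For example, with pairs (100, 1) and (10, 2) and factor 64, the score is max(100, 64) + max(10, 128) = 228.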

cost_local_tree_node(tree, node)[source]

The cost of a single node in tree, according to this objective. Used for subtree reconfiguration.

score_local(**kwargs)[source]

The score to give a single contraction, according to the given kwargs. Used in simulated_anneal.

score_slice_index(costs, ix)[source]

The score to give possibly slicing ix, according to the given costs. Used in the SliceFinder optimization.

get_dynamic_programming_minimize()[source]

Get the argument for optimal optimization, used in subtree reconfiguration.

__call__(trial)[source]

The core method that takes a trial generated by a contraction path driver and scores it to report to a hyper-optimizer. It might also update the parameters in the trial to reflect the desired cost.

class cotengra.scoring.CompressedStatsTracker(hg, chi)[source]
property combo_score
abstract property score
__slots__ = ('chi', 'flops', 'max_size', 'peak_size', 'write', 'total_size', 'total_size_post_contract',...
copy()[source]
update_pre_step()[source]
update_pre_compress(hg, *nodes)[source]
update_post_compress(hg, *nodes)[source]
update_pre_contract(hg, i, j)[source]
update_post_contract(hg, ij)[source]
update_post_step()[source]
update_score(other)[source]
describe(join=' ')[source]
__repr__()[source]

Return repr(self).

class cotengra.scoring.CompressedStatsTrackerSize(hg, chi, secondary_weight=0.001)[source]

Bases: CompressedStatsTracker

property score
__slots__
class cotengra.scoring.CompressedStatsTrackerPeak(hg, chi, secondary_weight=0.001)[source]

Bases: CompressedStatsTracker

property score
__slots__
class cotengra.scoring.CompressedStatsTrackerWrite(hg, chi, secondary_weight=0.001)[source]

Bases: CompressedStatsTracker

property score
__slots__
class cotengra.scoring.CompressedStatsTrackerFlops(hg, chi, secondary_weight=0.001)[source]

Bases: CompressedStatsTracker

property score
__slots__
class cotengra.scoring.CompressedStatsTrackerCombo(hg, chi, factor=DEFAULT_COMBO_FACTOR)[source]

Bases: CompressedStatsTracker

property score
__slots__
class cotengra.scoring.CompressedObjective(chi='auto', compress_late=False)[source]

Bases: Objective

Mixin for objectives that score based on a compressed contraction.

abstract get_compressed_stats_tracker(hg)[source]

Return a tracker for compressed contraction stats.

Parameters:

hg (Hypergraph) – The hypergraph to track stats for.

Returns:

The tracker.

Return type:

CompressedStatsTracker

compute_compressed_stats(trial)[source]
class cotengra.scoring.CompressedSizeObjective(chi='auto', compress_late=False, secondary_weight=0.001)[source]

Bases: CompressedObjective

Objective that scores based on the maximum size intermediate tensor during a compressed contraction with maximum bond dimension chi.

Parameters:
  • chi (int, optional) – Maximum bond dimension to use for the compressed contraction. Default is "auto", which will use the square of the maximum size of any input tensor dimension.

  • compress_late (bool, optional) – Whether to compress the neighboring tensors just after (early) or just before (late) contracting tensors. Default is False, i.e. early.

  • secondary_weight (float, optional) – Weighting factor for secondary objectives (flops and write). Default is 1e-3.

__slots__ = ('chi', 'compress_late', 'secondary_weight')
get_compressed_stats_tracker(hg)[source]

Return a tracker for compressed contraction stats.

Parameters:

hg (Hypergraph) – The hypergraph to track stats for.

Returns:

The tracker.

Return type:

CompressedStatsTracker

__call__(trial)[source]

The core method that takes a trial generated by a contraction path driver and scores it to report to a hyper-optimizer. It might also update the parameters in the trial to reflect the desired cost.

class cotengra.scoring.CompressedPeakObjective(chi='auto', compress_late=False, secondary_weight=0.001)[source]

Bases: CompressedObjective

Objective that scores based on the peak total concurrent size of intermediate tensors during a compressed contraction with maximum bond dimension chi.

Parameters:
  • chi (int, optional) – Maximum bond dimension to use for the compressed contraction. Default is "auto", which will use the square of the maximum size of any input tensor dimension.

  • compress_late (bool, optional) – Whether to compress the neighboring tensors just after (early) or just before (late) contracting tensors. Default is False, i.e. early.

  • secondary_weight (float, optional) – Weighting factor for secondary objectives (flops and write). Default is 1e-3.

__slots__ = ('chi', 'compress_late', 'secondary_weight')
get_compressed_stats_tracker(hg)[source]

Return a tracker for compressed contraction stats.

Parameters:

hg (Hypergraph) – The hypergraph to track stats for.

Returns:

The tracker.

Return type:

CompressedStatsTracker

__call__(trial)[source]

The core method that takes a trial generated by a contraction path driver and scores it to report to a hyper-optimizer. It might also update the parameters in the trial to reflect the desired cost.

class cotengra.scoring.CompressedWriteObjective(chi='auto', compress_late=False, secondary_weight=0.001)[source]

Bases: CompressedObjective

Objective that scores based on the total cumulative size of intermediate tensors during a compressed contraction with maximum bond dimension chi.

Parameters:
  • chi (int, optional) – Maximum bond dimension to use for the compressed contraction. Default is "auto", which will use the square of the maximum size of any input tensor dimension.

  • compress_late (bool, optional) – Whether to compress the neighboring tensors just after (early) or just before (late) contracting tensors. Default is False, i.e. early.

  • secondary_weight (float, optional) – Weighting factor for secondary objectives (flops and peak size). Default is 1e-3.

__slots__ = ('chi', 'compress_late', 'secondary_weight')
get_compressed_stats_tracker(hg)[source]

Return a tracker for compressed contraction stats.

Parameters:

hg (Hypergraph) – The hypergraph to track stats for.

Returns:

The tracker.

Return type:

CompressedStatsTracker

__call__(trial)[source]

The core method that takes a trial generated by a contraction path driver and scores it to report to a hyper-optimizer. It might also update the parameters in the trial to reflect the desired cost.

class cotengra.scoring.CompressedFlopsObjective(chi='auto', compress_late=False, secondary_weight=0.001)[source]

Bases: CompressedObjective

Objective that scores based on the total contraction flops during a compressed contraction with maximum bond dimension chi.

Parameters:
  • chi (int, optional) – Maximum bond dimension to use for the compressed contraction. Default is "auto", which will use the square of the maximum size of any input tensor dimension.

  • compress_late (bool, optional) – Whether to compress the neighboring tensors just after (early) or just before (late) contracting tensors. Default is False, i.e. early.

  • secondary_weight (float, optional) – Weighting factor for secondary objectives (write and peak size). Default is 1e-3.

__slots__ = ('chi', 'compress_late', 'secondary_weight')
get_compressed_stats_tracker(hg)[source]

Return a tracker for compressed contraction stats.

Parameters:

hg (Hypergraph) – The hypergraph to track stats for.

Returns:

The tracker.

Return type:

CompressedStatsTracker

__call__(trial)[source]

The core method that takes a trial generated by a contraction path driver and scores it to report to a hyper-optimizer. It might also update the parameters in the trial to reflect the desired cost.

class cotengra.scoring.CompressedComboObjective(chi='auto', compress_late=False, factor=DEFAULT_COMBO_FACTOR)[source]

Bases: CompressedObjective

Objective that scores based on a combination of estimated flops and write during a compressed contraction.

__slots__ = ('chi', 'compress_late', 'factor')
get_compressed_stats_tracker(hg)[source]

Return a tracker for compressed contraction stats.

Parameters:

hg (Hypergraph) – The hypergraph to track stats for.

Returns:

The tracker.

Return type:

CompressedStatsTracker

__call__(trial)[source]

The core method that takes a trial generated by a contraction path driver and scores it to report to a hyper-optimizer. It might also update the parameters in the trial to reflect the desired cost.

cotengra.scoring.score_matcher
cotengra.scoring.parse_minimize(minimize)[source]
cotengra.scoring._get_score_fn_str_cached(minimize)[source]
cotengra.scoring.get_score_fn(minimize)[source]
class cotengra.scoring.MultiObjective(num_configs)[source]

Bases: Objective

Base mixin class for all objectives.

__slots__ = ('num_configs',)
abstract compute_mult(dims)[source]
estimate_node_mult(tree, node)[source]
estimate_node_cache_mult(tree, node, sliced_ind_ordering)[source]
class cotengra.scoring.MultiObjectiveDense(num_configs)[source]

Bases: MultiObjective

Number of intermediate configurations is expected to scale as if all configurations are present.

__slots__ = ('num_configs',)
compute_mult(dims)[source]
cotengra.scoring.expected_coupons(num_sub, num_total)[source]

If we draw a random 'coupon' which can take num_sub different values num_total times, how many unique coupons do we expect?
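This expectation has a standard closed form: each value is missed by a single draw with probability (1 - 1/num_sub), so after num_total independent draws the expected number of distinct values seen is num_sub * (1 - (1 - 1/num_sub)^num_total). A sketch (the library's implementation may differ in form):

```python
def expected_coupons_estimate(num_sub, num_total):
    # probability a given value is never drawn in num_total tries:
    # (1 - 1/num_sub) ** num_total; summing the complement over all
    # num_sub values gives the expected number of unique coupons
    return num_sub * (1 - (1 - 1 / num_sub) ** num_total)
```

For instance, two draws from two possible values yield 2 * (1 - 0.25) = 1.5 expected unique coupons.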

class cotengra.scoring.MultiObjectiveUniform(num_configs)[source]

Bases: MultiObjective

Number of intermediate configurations is expected to scale as if all configurations are randomly drawn from a uniform distribution.

__slots__ = ('num_configs',)
compute_mult(dims)[source]
class cotengra.scoring.MultiObjectiveLinear(num_configs, coeff=1)[source]

Bases: MultiObjective

Number of intermediate configurations is expected to scale linearly with respect to the number of variable indices (e.g. VMC-like 'locally connected' configurations).

__slots__ = ('num_configs', 'coeff')
compute_mult(dims)[source]