cotengra.scoring¶
Objects for defining and customizing the target cost of a contraction.
Attributes¶
- DEFAULT_COMBO_FACTOR
- score_matcher
Classes¶
- Objective – Base mixin class for all objectives.
- ExactObjective – Mixin class for all exact objectives.
- FlopsObjective – Objective that scores based on estimated floating point operations.
- WriteObjective – Objective that scores based on estimated total write, i.e. the sum of sizes of all intermediates.
- SizeObjective – Objective that scores based on maximum intermediate size.
- ComboObjective – Objective that scores based on a combination of estimated floating point operations and total write.
- LimitObjective – Objective that scores based on a maximum of either estimated floating point operations or the total write.
- CompressedObjective – Mixin for objectives that score based on a compressed contraction.
- CompressedSizeObjective – Objective that scores based on the maximum size intermediate tensor during a compressed contraction.
- CompressedPeakObjective – Objective that scores based on the peak total concurrent size of intermediate tensors during a compressed contraction.
- CompressedWriteObjective – Objective that scores based on the total cumulative size of intermediate tensors during a compressed contraction.
- CompressedFlopsObjective – Objective that scores based on the total contraction flops during a compressed contraction.
- CompressedComboObjective – Mixin for objectives that score based on a compressed contraction.
- MultiObjective – Base mixin class for all objectives.
- MultiObjectiveDense – Number of intermediate configurations is expected to scale as if all configurations are present.
- MultiObjectiveUniform – Number of intermediate configurations is expected to scale as if all configurations are randomly drawn from a uniform distribution.
- MultiObjectiveLinear – Number of intermediate configurations is expected to scale linearly with the number of variable indices.
Functions¶
- expected_coupons – If we draw a random 'coupon' which can take num_sub different values num_total times, how many unique coupons do we expect?
Module Contents¶
- cotengra.scoring.DEFAULT_COMBO_FACTOR = 64¶
- class cotengra.scoring.Objective[source]¶
Base mixin class for all objectives.
- __slots__ = ()¶
- class cotengra.scoring.ExactObjective[source]¶
Bases:
Objective
Mixin class for all exact objectives.
- abstract cost_local_tree_node(tree, node)[source]¶
The cost of a single node in tree, according to this objective. Used for subtree reconfiguration.
- abstract score_local(**kwargs)[source]¶
The score to give a single contraction, according to the given kwargs. Used in simulated_anneal.
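To illustrate this interface, here is a minimal sketch of a custom exact objective. The ContractionTree method used for the per-node cost and the exact keyword arguments that simulated_anneal passes to score_local are assumptions, so treat this as the shape of the API rather than a drop-in implementation.

```python
import math

import cotengra as ctg


class FlopsOnlyObjective(ctg.scoring.ExactObjective):
    """Hypothetical objective that scores purely on flops."""

    __slots__ = ()

    def cost_local_tree_node(self, tree, node):
        # assumption: the ContractionTree exposes a per-node flops estimate
        return tree.get_flops(node)

    def score_local(self, **kwargs):
        # assumption: a 'flops' estimate is among the supplied kwargs
        return math.log2(kwargs["flops"] + 1)
```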
- class cotengra.scoring.FlopsObjective(secondary_weight=0.001)[source]¶
Bases:
ExactObjective
Objective that scores based on estimated floating point operations.
- Parameters:
secondary_weight (float, optional) – Weighting factor for secondary objectives (max size and total write). Default is 1e-3.
- __slots__ = ('secondary_weight',)¶
- secondary_weight = 0.001¶
- cost_local_tree_node(tree, node)[source]¶
The cost of a single node in tree, according to this objective. Used for subtree reconfiguration.
- score_local(**kwargs)[source]¶
The score to give a single contraction, according to the given kwargs. Used in simulated_anneal.
- score_slice_index(costs, ix)[source]¶
The score to give possibly slicing ix, according to the given costs. Used in the SliceFinder optimization.
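A minimal usage sketch, assuming that an objective instance can be passed as the minimize argument of a hyper-optimizer in place of the shorthand string "flops":

```python
import cotengra as ctg

# tune the weighting of the secondary terms explicitly
objective = ctg.scoring.FlopsObjective(secondary_weight=1e-3)

# equivalent in spirit to minimize="flops"
opt = ctg.HyperOptimizer(minimize=objective, max_repeats=32)
```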
- class cotengra.scoring.WriteObjective(secondary_weight=0.001)[source]¶
Bases:
ExactObjective
Objective that scores based on estimated total write, i.e. the sum of sizes of all intermediates. This is relevant for completely memory-bound contractions, and also for back-propagation.
- Parameters:
secondary_weight (float, optional) – Weighting factor for secondary objectives (max size and total flops). Default is 1e-3.
- __slots__ = ('secondary_weight',)¶
- secondary_weight = 0.001¶
- cost_local_tree_node(tree, node)[source]¶
The cost of a single node in tree, according to this objective. Used for subtree reconfiguration.
- score_local(**kwargs)[source]¶
The score to give a single contraction, according to the given kwargs. Used in simulated_anneal.
- score_slice_index(costs, ix)[source]¶
The score to give possibly slicing ix, according to the given costs. Used in the SliceFinder optimization.
- class cotengra.scoring.SizeObjective(secondary_weight=0.001)[source]¶
Bases:
ExactObjective
Objective that scores based on maximum intermediate size.
- Parameters:
secondary_weight (float, optional) – Weighting factor for secondary objectives (total flops and total write). Default is 1e-3.
- __slots__ = ('secondary_weight',)¶
- secondary_weight = 0.001¶
- cost_local_tree_node(tree, node)[source]¶
The cost of a single node in tree, according to this objective. Used for subtree reconfiguration.
- score_local(**kwargs)[source]¶
The score to give a single contraction, according to the given kwargs. Used in simulated_anneal.
- score_slice_index(costs, ix)[source]¶
The score to give possibly slicing ix, according to the given costs. Used in the SliceFinder optimization.
- class cotengra.scoring.ComboObjective(factor=DEFAULT_COMBO_FACTOR)[source]¶
Bases:
ExactObjective
Objective that scores based on a combination of estimated floating point operations and total write, according to:
\[\log_2(\text{flops} + \alpha \times \text{write})\]
where \(\alpha\) is the factor parameter of this objective, which describes approximately how much slower write speeds are.
- Parameters:
factor (float, optional) – Weighting factor for total write. Default is 64.
- __slots__ = ('factor',)¶
- factor = 64¶
- cost_local_tree_node(tree, node)[source]¶
The cost of a single node in tree, according to this objective. Used for subtree reconfiguration.
- score_local(**kwargs)[source]¶
The score to give a single contraction, according to the given kwargs. Used in simulated_anneal.
- score_slice_index(costs, ix)[source]¶
The score to give possibly slicing ix, according to the given costs. Used in the SliceFinder optimization.
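As a concrete check of the formula above, the snippet below evaluates the combo score for a single hypothetical contraction; the numbers are invented purely for illustration.

```python
import math

flops = 2**30   # estimated scalar operations for one contraction (made up)
write = 2**20   # estimated size of the written intermediate (made up)
factor = 64     # DEFAULT_COMBO_FACTOR

score = math.log2(flops + factor * write)
print(round(score, 2))  # 30.09 -- flops dominates the weighted write here
```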
- class cotengra.scoring.LimitObjective(factor=DEFAULT_COMBO_FACTOR)[source]¶
Bases:
ExactObjective
Objective that scores based on a maximum of either estimated floating point operations or the total write, weighted by some factor:
\[\sum_{i} \max(\text{flops}_i, \alpha \times \text{write}_i)\]
for each contraction \(i\), where \(\alpha\) is the factor parameter of this objective, which describes approximately how much slower write speeds are. This assumes that one or the other is the limiting factor.
- Parameters:
factor (float, optional) – Weighting factor for total write. Default is 64.
- factor = 64¶
- cost_local_tree_node(tree, node)[source]¶
The cost of a single node in tree, according to this objective. Used for subtree reconfiguration.
- score_local(**kwargs)[source]¶
The score to give a single contraction, according to the given kwargs. Used in simulated_anneal.
- score_slice_index(costs, ix)[source]¶
The score to give possibly slicing ix, according to the given costs. Used in the SliceFinder optimization.
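Again with invented numbers, this sketch evaluates the sum in the formula above, showing how each contraction contributes whichever of flops or weighted write dominates:

```python
flops = [2**20, 2**10]   # per-contraction flops estimates (made up)
writes = [2**8, 2**10]   # per-contraction write estimates (made up)
factor = 64

total = sum(max(f, factor * w) for f, w in zip(flops, writes))
# contraction 1: flops 2**20 beats 64 * 2**8 = 2**14
# contraction 2: 64 * 2**10 = 2**16 beats flops 2**10
print(total)  # 1114112
```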
- class cotengra.scoring.CompressedStatsTracker(hg, chi)[source]¶
- __slots__ = ('chi', 'flops', 'max_size', 'peak_size', 'write', 'total_size', 'total_size_post_contract',...¶
- total_size = 0¶
- total_size_post_contract = 0¶
- contracted_size = 0¶
- size_change = 0¶
- flops_change = 0¶
- flops = 0¶
- max_size = 0¶
- property combo_score¶
- abstract property score¶
- class cotengra.scoring.CompressedStatsTrackerSize(hg, chi, secondary_weight=0.001)[source]¶
Bases:
CompressedStatsTracker
- __slots__ = ('chi', 'flops', 'max_size', 'peak_size', 'write', 'total_size', 'total_size_post_contract',...¶
- secondary_weight = 0.001¶
- property score¶
- class cotengra.scoring.CompressedStatsTrackerPeak(hg, chi, secondary_weight=0.001)[source]¶
Bases:
CompressedStatsTracker
- __slots__ = ('chi', 'flops', 'max_size', 'peak_size', 'write', 'total_size', 'total_size_post_contract',...¶
- secondary_weight = 0.001¶
- property score¶
- class cotengra.scoring.CompressedStatsTrackerWrite(hg, chi, secondary_weight=0.001)[source]¶
Bases:
CompressedStatsTracker
- __slots__ = ('chi', 'flops', 'max_size', 'peak_size', 'write', 'total_size', 'total_size_post_contract',...¶
- secondary_weight = 0.001¶
- property score¶
- class cotengra.scoring.CompressedStatsTrackerFlops(hg, chi, secondary_weight=0.001)[source]¶
Bases:
CompressedStatsTracker
- __slots__ = ('chi', 'flops', 'max_size', 'peak_size', 'write', 'total_size', 'total_size_post_contract',...¶
- secondary_weight = 0.001¶
- property score¶
- class cotengra.scoring.CompressedStatsTrackerCombo(hg, chi, factor=DEFAULT_COMBO_FACTOR)[source]¶
Bases:
CompressedStatsTracker
- __slots__ = ('chi', 'flops', 'max_size', 'peak_size', 'write', 'total_size', 'total_size_post_contract',...¶
- factor = 64¶
- property score¶
- class cotengra.scoring.CompressedObjective(chi='auto', compress_late=False)[source]¶
Bases:
Objective
Mixin for objectives that score based on a compressed contraction.
- chi = 'auto'¶
- compress_late = False¶
- class cotengra.scoring.CompressedSizeObjective(chi='auto', compress_late=False, secondary_weight=0.001)[source]¶
Bases:
CompressedObjective
Objective that scores based on the maximum size intermediate tensor during a compressed contraction with maximum bond dimension chi.
- Parameters:
chi (int, optional) – Maximum bond dimension to use for the compressed contraction. Default is "auto", which will use the square of the maximum size of any input tensor dimension (see the sketch below).
compress_late (bool, optional) – Whether to compress the neighboring tensors just after (early) or just before (late) contracting tensors. Default is False, i.e. early.
secondary_weight (float, optional) – Weighting factor for secondary objectives (flops and write). Default is 1e-3.
- __slots__ = ('chi', 'compress_late', 'secondary_weight')¶
- secondary_weight = 0.001¶
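The "auto" rule quoted above resolves chi from the index sizes of the inputs; a minimal sketch of that rule, assuming a plain size_dict mapping index names to dimensions (the actual resolution happens inside cotengra):

```python
size_dict = {"a": 2, "b": 4, "c": 3}  # hypothetical index sizes

# "auto" -> square of the largest input tensor dimension
chi = max(size_dict.values()) ** 2
print(chi)  # 16
```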
- class cotengra.scoring.CompressedPeakObjective(chi='auto', compress_late=False, secondary_weight=0.001)[source]¶
Bases:
CompressedObjective
Objective that scores based on the peak total concurrent size of intermediate tensors during a compressed contraction with maximum bond dimension chi.
- Parameters:
chi (int, optional) – Maximum bond dimension to use for the compressed contraction. Default is "auto", which will use the square of the maximum size of any input tensor dimension.
compress_late (bool, optional) – Whether to compress the neighboring tensors just after (early) or just before (late) contracting tensors. Default is False, i.e. early.
secondary_weight (float, optional) – Weighting factor for secondary objectives (flops and write). Default is 1e-3.
- __slots__ = ('chi', 'compress_late', 'secondary_weight')¶
- secondary_weight = 0.001¶
- class cotengra.scoring.CompressedWriteObjective(chi='auto', compress_late=False, secondary_weight=0.001)[source]¶
Bases:
CompressedObjective
Objective that scores based on the total cumulative size of intermediate tensors during a compressed contraction with maximum bond dimension chi.
- Parameters:
chi (int, optional) – Maximum bond dimension to use for the compressed contraction. Default is "auto", which will use the square of the maximum size of any input tensor dimension.
compress_late (bool, optional) – Whether to compress the neighboring tensors just after (early) or just before (late) contracting tensors. Default is False, i.e. early.
secondary_weight (float, optional) – Weighting factor for secondary objectives (flops and peak size). Default is 1e-3.
- __slots__ = ('chi', 'compress_late', 'secondary_weight')¶
- secondary_weight = 0.001¶
- class cotengra.scoring.CompressedFlopsObjective(chi='auto', compress_late=False, secondary_weight=0.001)[source]¶
Bases:
CompressedObjective
Objective that scores based on the total contraction flops during a compressed contraction with maximum bond dimension chi.
- Parameters:
chi (int, optional) – Maximum bond dimension to use for the compressed contraction. Default is "auto", which will use the square of the maximum size of any input tensor dimension.
compress_late (bool, optional) – Whether to compress the neighboring tensors just after (early) or just before (late) contracting tensors. Default is False, i.e. early.
secondary_weight (float, optional) – Weighting factor for secondary objectives (write and peak size). Default is 1e-3.
- __slots__ = ('chi', 'compress_late', 'secondary_weight')¶
- secondary_weight = 0.001¶
- class cotengra.scoring.CompressedComboObjective(chi='auto', compress_late=False, factor=DEFAULT_COMBO_FACTOR)[source]¶
Bases:
CompressedObjective
Mixin for objectives that score based on a compressed contraction.
- __slots__ = ('chi', 'compress_late', 'factor')¶
- factor = 64¶
- cotengra.scoring.score_matcher¶
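score_matcher carries no docstring here; it presumably parses minimize shorthands such as "combo-64" or "flops" into an objective name plus an optional numeric factor. The pattern below is an illustrative stand-in, not the library's actual expression.

```python
import re

# stand-in for the kind of parsing score_matcher presumably performs
pattern = re.compile(r"^([a-z-]+?)-?(\d*)$")

name, factor = pattern.match("combo-64").groups()
print(name, factor or None)  # combo 64
```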
- class cotengra.scoring.MultiObjective(num_configs)[source]¶
Bases:
Objective
Base mixin class for all objectives.
- __slots__ = ('num_configs',)¶
- num_configs¶
- class cotengra.scoring.MultiObjectiveDense(num_configs)[source]¶
Bases:
MultiObjective
Number of intermediate configurations is expected to scale as if all configurations are present.
- __slots__ = ('num_configs',)¶
- cotengra.scoring.expected_coupons(num_sub, num_total)[source]¶
If we draw a random 'coupon' which can take num_sub different values num_total times, how many unique coupons do we expect?
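This is the classic coupon-collector expectation; for reference, the closed form under uniform random draws is sketched below (assumed to be equivalent to what the function evaluates, possibly in a more numerically careful form).

```python
def expected_coupons_reference(num_sub, num_total):
    # expected number of distinct values seen after num_total uniform
    # draws from num_sub equally likely values
    return num_sub * (1 - (1 - 1 / num_sub) ** num_total)

print(expected_coupons_reference(6, 10))  # ~5.03 distinct faces in 10 dice rolls
```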
- class cotengra.scoring.MultiObjectiveUniform(num_configs)[source]¶
Bases:
MultiObjective
Number of intermediate configurations is expected to scale as if all configurations are randomly drawn from a uniform distribution.
- __slots__ = ('num_configs',)¶
- class cotengra.scoring.MultiObjectiveLinear(num_configs, coeff=1)[source]¶
Bases:
MultiObjective
Number of intermediate configurations is expected to scale linearly with respect to the number of variable indices (e.g. VMC-like 'locally connected' configurations).
- __slots__ = ('num_configs', 'coeff')¶
- coeff = 1¶