Visualization#

Various aspects of the contraction process can be visualized using the following functions.

%config InlineBackend.figure_formats = ['svg']
import cotengra as ctg

Hypergraph visualization#

The first visualization we can do is of the hypergraph corresponding to the geometry of the tensor network or equation, with HyperGraph.plot. Hyper edges are shown as a zero-size vertex connecting the tensors they appear on, like a COPY-tensor:

inputs, output, shapes, size_dict = ctg.utils.rand_equation(
    6,
    2,
    n_out=1,
    n_hyper_in=1,
    n_hyper_out=1,
    seed=4,
)
print(ctg.utils.inputs_output_to_eq(inputs, output))
ctg.get_hypergraph(inputs, output, size_dict).plot();
abce,bdf,bdef,a,b,ab->ca

This simple contraction has five types of index:

  1. standard inner index - ‘e’ - which appears on exactly two tensors

  2. standard inner multi-indices - ‘d’, ‘f’ - which appear together on the same two tensors

  3. standard outer index - ‘c’ - which appears on exactly one tensor and the output

  4. hyper inner index - ‘b’ - which appears on more than two tensors but not the output

  5. hyper outer index - ‘a’ - which appears on multiple tensors as well as the output

The nodes and indices are assigned unique colors by default, with hyper indices shown as dashed lines.
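These categories can be recovered directly from the equation by counting index occurrences. A minimal sketch in pure Python, using the equation printed above (multi-indices like ‘d’ and ‘f’ classify as standard inner indices here, the distinction being only that they share the same pair of tensors):

```python
from collections import Counter

# the equation printed above
eq = "abce,bdf,bdef,a,b,ab->ca"
lhs, output = eq.split("->")
inputs = lhs.split(",")

# count how many input tensors each index appears on
counts = Counter(ix for term in inputs for ix in term)

def classify(ix):
    n, in_out = counts[ix], ix in output
    if n > 2:
        # hyper: appears on more than two tensors
        return "hyper outer" if in_out else "hyper inner"
    return "standard outer" if in_out else "standard inner"

for ix in sorted(counts):
    print(ix, classify(ix))
```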

Small tree visualization#

If the network is small enough and we have a ContractionTree for it, we can also visualize it in its entirety, including all indices involved at each intermediate contraction, using the ContractionTree.plot_flat method:

tree = ctg.array_contract_tree(inputs, output, size_dict)
tree.plot_flat();

Here the unique node and index coloring by default matches that of the default hypergraph visualization. The contraction flows from bottom to top.

For the remaining examples, we’ll generate a larger 2D lattice contraction, with no outputs or hyper indices:

# generate an equation representing a 2D lattice
inputs, output, shapes, size_dict = ctg.utils.lattice_equation([5, 6])
hg = ctg.get_hypergraph(inputs, output, size_dict)
hg.plot(draw_edge_labels=True);

You can turn off the node/edge coloring or set the node coloring to a simple centrality measure like so:

hg.plot(node_color="centrality", edge_color=False);

Optimizer trials visualization#

If we run a HyperOptimizer, we can visualize how the scores progress with trials using the HyperOptimizer.plot_trials method:

opt = ctg.HyperOptimizer(methods=['greedy', 'kahypar', 'labels'], progbar=True)

# run it and generate a tree
tree = opt.search(inputs, output, size_dict)
log10[FLOPS]=3.46 log10[COMBO]=4.719 log2[SIZE]=6 log2[PEAK]=8.409: 100%|█| 128/128 [00

By default the y-axis is the objective score, but you can specify e.g. 'flops':

opt.plot_trials(y='flops');

Similarly, you can supply x="time" to plot the scores as a function of cumulative CPU time.

We can also plot the distribution of contraction costs against contraction widths using the HyperOptimizer.plot_scatter method:

opt.plot_scatter(x='size', y='flops');

Large Tree visualizations#

The following visualization functions are available for inspecting a single, complete ContractionTree once generated. They mostly wrap plot_tree, which documents most of the extra options.

Contractions#

tree.plot_contractions gives you an overview of the memory and costs throughout the contraction:

tree.plot_contractions();

Here, peak is the memory required to store all concurrently needed intermediates at once, write is the size of each new intermediate tensor (the maximum of which is the contraction width), and cost is the number of scalar operations for each contraction.
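The printed costs and widths for a single pairwise contraction can be reproduced by hand. A sketch assuming, as in this lattice, that every index has dimension 2 (widths are log2 sizes), checked against contraction (0) in the listing below:

```python
from math import log2

# assumed bond dimension: every index in this lattice equation has size 2
d = 2

def pairwise_stats(lhs_a, lhs_b, out):
    # cost: one scalar multiplication per element of the full index union
    union = set(lhs_a) | set(lhs_b)
    cost = d ** len(union)
    # widths: log2 of each tensor's total size
    widths = (
        log2(d ** len(lhs_a)),
        log2(d ** len(lhs_b)),
        log2(d ** len(out)),
    )
    return cost, widths

# contraction (0): ihj,kj->ihk
cost, widths = pairwise_stats("ihj", "kj", "ihk")
print(cost, widths)  # 16 (3.0, 2.0, 3.0)
```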

The list of corresponding pairwise contractions can be explicitly shown with the print_contractions method:

tree.print_contractions()
(0) cost: 1.6e+01 widths: 3.0,2.0->3.0 type: tensordot
inputs: ih[j],k[j]->
output: ih(k)

(1) cost: 3.2e+01 widths: 3.0,3.0->4.0 type: tensordot
inputs: ih[k],[k]vu->
output: ih(vu)

(2) cost: 6.4e+01 widths: 4.0,4.0->4.0 type: tensordot
inputs: [i]hv[u],[i]ts[u]->
output: hv(ts)

(3) cost: 6.4e+01 widths: 4.0,3.0->5.0 type: tensordot
inputs: h[v]ts,[v]GF->
output: hts(GF)

(4) cost: 1.3e+02 widths: 5.0,4.0->5.0 type: tensordot
inputs: h[t]sG[F],[t]ED[F]->
output: hsG(ED)

(5) cost: 1.6e+01 widths: 3.0,2.0->3.0 type: tensordot
inputs: PV[W],R[W]->
output: PV(R)

(6) cost: 3.2e+01 widths: 3.0,3.0->4.0 type: tensordot
inputs: PV[R],G[R]Q->
output: PV(GQ)

(7) cost: 6.4e+01 widths: 4.0,4.0->4.0 type: tensordot
inputs: [P]VG[Q],E[P]O[Q]->
output: VG(EO)

(8) cost: 1.3e+02 widths: 5.0,4.0->5.0 type: tensordot
inputs: hs[GE]D,V[GE]O->
output: hsD(VO)

(9) cost: 6.4e+01 widths: 4.0,3.0->5.0 type: tensordot
inputs: C[N]MO,[N]UV->
output: CMO(UV)

(10) cost: 2.6e+02 widths: 5.0,5.0->6.0 type: tensordot
inputs: hsD[VO],CM[O]U[V]->
output: hsD(CMU)

(11) cost: 2.6e+02 widths: 6.0,4.0->6.0 type: tensordot
inputs: hs[DC]MU,r[C]B[D]->
output: hsMU(rB)

(12) cost: 6.4e+01 widths: 3.0,4.0->5.0 type: tensordot
inputs: [g]fh,[g]rqs->
output: fh(rqs)

(13) cost: 2.6e+02 widths: 6.0,5.0->5.0 type: tensordot
inputs: [hs]MU[r]B,f[hr]q[s]->
output: MUB(fq)

(14) cost: 1.6e+01 widths: 3.0,2.0->3.0 type: tensordot
inputs: w[H]I,[H]S->
output: wI(S)

(15) cost: 3.2e+01 widths: 3.0,3.0->4.0 type: tensordot
inputs: wI[S],J[S]T->
output: wI(JT)

(16) cost: 6.4e+01 widths: 4.0,4.0->4.0 type: tensordot
inputs: w[IJ]T,y[JI]K->
output: wT(yK)

(17) cost: 6.4e+01 widths: 4.0,3.0->5.0 type: tensordot
inputs: [w]TyK,l[w]x->
output: TyK(lx)

(18) cost: 1.3e+02 widths: 5.0,4.0->5.0 type: tensordot
inputs: T[y]Kl[x],n[yx]z->
output: TKl(nz)

(19) cost: 6.4e+01 widths: 4.0,3.0->5.0 type: tensordot
inputs: A[L]KM,[L]TU->
output: AKM(TU)

(20) cost: 2.6e+02 widths: 5.0,5.0->6.0 type: tensordot
inputs: [TK]lnz,A[K]M[T]U->
output: lnz(AMU)

(21) cost: 2.6e+02 widths: 6.0,4.0->6.0 type: tensordot
inputs: ln[zA]MU,p[Az]B->
output: lnMU(pB)

(22) cost: 2.6e+02 widths: 5.0,6.0->5.0 type: tensordot
inputs: [MUB]fq,ln[MU]p[B]->
output: fq(lnp)

(23) cost: 1.3e+02 widths: 5.0,4.0->5.0 type: tensordot
inputs: f[q]ln[p],e[p]o[q]->
output: fln(eo)

(24) cost: 6.4e+01 widths: 5.0,3.0->4.0 type: tensordot
inputs: [f]ln[e]o,[e]d[f]->
output: lno(d)

(25) cost: 6.4e+01 widths: 4.0,4.0->4.0 type: tensordot
inputs: l[no]d,c[n]m[o]->
output: ld(cm)

(26) cost: 3.2e+01 widths: 4.0,3.0->3.0 type: tensordot
inputs: [l]dc[m],a[lm]->
output: dc(a)

(27) cost: 1.6e+01 widths: 3.0,3.0->2.0 type: tensordot
inputs: [dc]a,[c]b[d]->
output: a(b)

(28) cost: 4.0e+00 widths: 2.0,2.0->0.0 type: tensordot
inputs: [ab],[ab]->
output: 


The indices are colored according to:

  1. blue - appears on left input and is kept

  2. green - appears on right input and is kept

  3. red - appears on both inputs and is contracted away

  4. purple - appears on both inputs and is kept (a ‘batch’ or hyper index)
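These four roles can be derived for any pairwise contraction from its input and output index strings. A simplified sketch (ignoring sliced or hyper edge cases), applied to contraction (0) from the listing above, plus a hypothetical batched matrix multiply to illustrate the purple case:

```python
def index_roles(left, right, out):
    """Classify each index of a pairwise contraction into the
    four colored roles described above."""
    roles = {}
    for ix in sorted(set(left) | set(right)):
        on_both = (ix in left) and (ix in right)
        if on_both and ix in out:
            roles[ix] = "batch (purple)"
        elif on_both:
            roles[ix] = "contracted (red)"
        elif ix in left:
            roles[ix] = "left kept (blue)"
        else:
            roles[ix] = "right kept (green)"
    return roles

# contraction (0) from the listing: ihj,kj->ihk
print(index_roles("ihj", "kj", "ihk"))

# hypothetical batched matrix multiply, to show a batch index
print(index_roles("abx", "bcx", "acx"))
```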

Tent#

The most general purpose visualization for the structure of a ContractionTree is the ContractionTree.plot_tent method. This plots the input network (in grey) at the bottom, and the contraction tree intermediates laid out above. The width and color of the tree edges denote the intermediate tensor widths, and the size and color of the tree nodes denote the FLOPs required to contract each intermediate tensor:

tree.plot_tent();

If you supply order=True then the intermediate nodes will appear in the exact vertical order in which the contractions are performed:

tree.plot_tent(order=True);

Note

If you have sliced indices, these will appear as dashed lines in the input graph.

Circuit#

If you want to plot only the tree with an emphasis on the order of operations then you can use the ContractionTree.plot_circuit method:

tree.plot_circuit();

Ring#

Another option is the ContractionTree.plot_ring method, which lays out the input network on a ring, with the contraction tree intermediates laid out towards the center. The more arcs that cross between branches, the more expensive that contraction is. This can be useful for inspecting how many ‘spines’ a contraction has, or how balanced it is:

tree.plot_ring();

Rubberband#

For small and close-to-planar graphs, an alternative visualization is the ContractionTree.plot_rubberband method, which uses quimb. Here, nodes of the input graph are hierarchically grouped into bands according to the contraction tree. The order of contraction is represented by the colors:

tree.plot_rubberband();

All of the above methods can be pretty extensively customized, including by supplying custom colormaps. They also return (fig, ax) for further customization or embedding in other plots.

inputs, output, shapes, size_dict = ctg.utils.lattice_equation([5, 5, 5])
opt = ctg.HyperOptimizer(progbar=True, reconf_opts={}, minimize='combo-256')
tree = opt.search(inputs, output, size_dict)
log10[FLOPS]=11.54 log10[COMBO]=11.99 log2[SIZE]=31 log2[PEAK]=32: 100%|█| 128/128 [00:
fig, ax = tree.plot_tent(
    raw_edge_alpha=0.0,
    edge_colormap='Greens',
    node_colormap='Greens',
    edge_scale=2,
    node_scale=0.5,
    colorbars=False,
    tree_root_height=-1.0,
    figsize=(10, 10),
)