High Level

You can use cotengra as a drop-in replacement for numpy.einsum or opt_einsum.contract, and benefit from its improved path optimization, contraction routines and features such as slicing, via the following high level interface functions.

Traditional einsum style, where the contraction is specified as a compact and more human-readable equation string:

  • einsum

  • einsum_tree

  • einsum_expression

Programmatic style, where the indices can be specified as sequences of arbitrary hashable objects:

  • array_contract

  • array_contract_tree

  • array_contract_expression

The following are equivalent ways to perform a matrix multiplication and transpose of x and y:

import cotengra as ctg

# einsum style
z = ctg.einsum("ab,bc->ca", x, y)

# programmatic style
z = ctg.array_contract(
    arrays=(x, y),
    inputs=[(0, 1), (1, 2)],
    output=(2, 0),
)

The *_tree functions (einsum_tree and array_contract_tree) return a ContractionTree object, which can be used to inspect the contraction order and various properties such as cost and width. The *_expression functions (einsum_expression and array_contract_expression) return a function that performs the contraction, which can be called with any matching input arrays. All of these functions take an optimize kwarg which specifies the contraction strategy. It can be one of the following:

  • str : a preset such as 'auto', 'auto-hq', 'greedy' or 'optimal'

  • PathOptimizer : a custom optimizer from cotengra or opt_einsum

  • ContractionTree : a contraction tree generated previously

  • Sequence[tuple[int]] : an explicit path, specified manually or generated previously

If the optimizer provides sliced indices then the contraction will make use of them to reduce memory usage. The default is the cotengra preset 'auto'. If you explicitly want to use an opt_einsum preset then you can supply, for example, optimize='opt_einsum:auto'.
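
For example, each of the following is a valid value for optimize (a minimal sketch, using small random arrays purely for illustration):

import numpy as np
import cotengra as ctg

x, y, w = (np.random.rand(8, 8) for _ in range(3))
eq = "ab,bc,cd->ad"

# a preset string
z1 = ctg.einsum(eq, x, y, w, optimize="greedy")

# a cotengra optimizer instance (PathOptimizer compatible)
opt = ctg.HyperOptimizer(max_repeats=32)
z2 = ctg.einsum(eq, x, y, w, optimize=opt)

# an explicit path, contracting inputs pairwise by position
z3 = ctg.einsum(eq, x, y, w, optimize=[(0, 1), (0, 1)])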

See the docstring of array_contract_expression for other options.

einsum interfaces

Here the contraction is specified using the string equation form of numpy.einsum and other einsum implementations. For example:

  • "ab,bc->ac" matrix multiplication

  • "Xab,Xbc->Xac" batch matrix multiplication

  • "ab,ab->ab" Hadamard (elementwise) product

Hint

cotengra supports all types of explicit equation (such as repeated and hyper/batch indices appearing any number of times). The einsum versions also support broadcasting over automatically expanded dimensions using an ellipsis ('...').
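
For example, both hyper indices and ellipsis broadcasting work directly (a small sketch with random arrays, purely illustrative):

import numpy as np
import cotengra as ctg

x, y, z = (np.random.rand(2, 3) for _ in range(3))

# 'a' is a hyper index, appearing on all three inputs and the output
ctg.einsum("ab,ab,ab->a", x, y, z)

# ellipsis broadcasting over leading batch dimensions, as in numpy.einsum
xb, yb = np.random.rand(5, 2, 3), np.random.rand(5, 3, 4)
ctg.einsum("...ab,...bc->...ac", xb, yb)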

If the right hand side is not specified then the output indices are computed as every index that appears once on the left hand side inputs, in sorted order. For example 'ba' is completed as a transposition: 'ba->ab'.
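
For instance, the following two calls are equivalent (a minimal sketch):

import numpy as np
import cotengra as ctg

x = np.random.rand(3, 4)
assert (ctg.einsum("ba", x) == ctg.einsum("ba->ab", x)).all()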

Let's generate a less trivial example:

%config InlineBackend.figure_formats = ['svg']
import cotengra as ctg

# generate the 'inputs and output' format contraction
inputs, output, shapes, size_dict = ctg.utils.lattice_equation([4, 5])

# generate the 'eq' format
eq = ctg.utils.inputs_output_to_eq(inputs, output)
print(eq)

# make example arrays
arrays = ctg.utils.make_arrays_from_inputs(inputs, size_dict, seed=42)
ab,cbd,edf,gfh,ih,ajk,clkm,enmo,gpoq,irq,jst,lutv,nwvx,pyxz,rAz,sB,uBC,wCD,yDE,AE->

And perform the contraction:

ctg.einsum(eq, *arrays)
array(776.48544934)

If we want to inspect the contraction before performing it, we can build the contraction tree first. Here we supply just the shapes of the arrays:

tree = ctg.einsum_tree(eq, *shapes)

# some typical quantities of interest:
tree.contract_stats()
{'flops': 964, 'write': 293, 'size': 32}

The contraction tree has many methods to inspect and manipulate the contraction.

tree.plot_rubberband()
(<Figure size 500x500 with 1 Axes>, <Axes: >)
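
A couple of the key quantities are also available as individual methods, for example (a brief sketch):

# log2 size of the largest intermediate tensor
tree.contraction_width()

# total number of scalar operations
tree.contraction_cost()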

Sometimes it is useful to generate the function that will perform the contraction for any given inputs. This can be done with the einsum_expression function:

expr = ctg.einsum_expression(eq, *shapes)
expr(*arrays)
array(776.48544934)
arrays = ctg.utils.make_arrays_from_inputs(inputs, size_dict, seed=43)
expr(*arrays)
array(2252.66569969)

Note

einsum itself caches the expressions it builds for preset optimizers by default, so at least in terms of performance savings this is often not necessary.

array_contract interfaces

The einsum format is less convenient for dynamically generated contractions. The alternative interface cotengra provides is the set of array_contract functions. These specify a contraction with the following three arguments:

  • inputs : Sequence[Sequence[Hashable]], the indices of each input

  • output : Sequence[Hashable], the output indices

  • size_dict : Mapping[Hashable, int], the size of each index

The indices are mapped (‘canonicalized’) into single letters in the order they appear on the inputs, allowing matching geometries to be cached.

If output is not specified it is computed as the indices that appear exactly once in inputs, in the order they first appear. Note this is slightly different to einsum, which sorts the output indices, since here the indices are only required to be hashable (not necessarily sortable).

Note

You can also supply shapes to array_contract_tree and array_contract_expression rather than build size_dict manually.
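
For example, a minimal sketch of building a reusable expression directly from index labels and shapes (the labels and shapes here are purely illustrative):

import numpy as np
import cotengra as ctg

expr = ctg.array_contract_expression(
    inputs=[("i", "j"), ("j", "k")],
    output=("i", "k"),
    shapes=[(2, 3), (3, 4)],
    optimize="greedy",
)

x, y = np.random.rand(2, 3), np.random.rand(3, 4)
z = expr(x, y)  # equivalent to x @ y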

As an example, we’ll directly contract the arrays from a tensor network generated with quimb:

import quimb.tensor as qtn

tn = qtn.TN_rand_reg(100, 3, D=2, seed=42)
tn.draw(edge_color=True)
tn
TensorNetworkGen(tensors=100, indices=150)
Tensor(shape=(2, 2, 2), inds=[_67af2fAAAAC, _67af2fAAAAB, _67af2fAAAAA], tags={I0}), backend=numpy, dtype=float64
Tensor(shape=(2, 2, 2), inds=[_67af2fAAAAF, _67af2fAAAAE, _67af2fAAAAD], tags={I1}), backend=numpy, dtype=float64
Tensor(shape=(2, 2, 2), inds=[_67af2fAAAAI, _67af2fAAAAH, _67af2fAAAAG], tags={I2}), backend=numpy, dtype=float64

...

Although quimb can automatically generate the einsum equation in this case, constructing the string manually is generally awkward. Instead we can directly use the tensor network's index names:

# collect each tensor's raw array and index labels
arrays = []
inputs = []
for t in tn:
    arrays.append(t.data)
    inputs.append(t.inds)

In this case they are randomly generated unique identifiers, but they could be anything hashable, as long as they match the shapes of the arrays and encode the geometry of the contraction.

inputs[:5]
[('_67af2fAAAAC', '_67af2fAAAAB', '_67af2fAAAAA'),
 ('_67af2fAAAAF', '_67af2fAAAAE', '_67af2fAAAAD'),
 ('_67af2fAAAAI', '_67af2fAAAAH', '_67af2fAAAAG'),
 ('_67af2fAAAAL', '_67af2fAAAAK', '_67af2fAAAAJ'),
 ('_67af2fAAAAN', '_67af2fAAAAM', '_67af2fAAAAB')]
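
As a sketch of what 'anything hashable' allows, one could equally label indices with, say, coordinate tuples and strings (a hypothetical illustration, unrelated to the network above):

import numpy as np
import cotengra as ctg

x, y = np.random.rand(2, 3), np.random.rand(3, 2)

ctg.array_contract(
    arrays=(x, y),
    inputs=[((0, 0), "bond"), ("bond", (0, 1))],
    output=((0, 0), (0, 1)),
)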

Perform the contraction, using the higher quality 'auto-hq' preset:

%%time
ctg.array_contract(arrays, inputs, optimize='auto-hq')
CPU times: user 356 ms, sys: 585 µs, total: 356 ms
Wall time: 322 ms
array(-1.78478002e+15)

Because the contraction expression is cached, performing the contraction again with the same preset is faster:

%%time
ctg.array_contract(arrays, inputs, optimize='auto-hq')
CPU times: user 157 ms, sys: 4.63 ms, total: 162 ms
Wall time: 33.1 ms
array(-1.78478002e+15)
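
If you want explicit control over this reuse, you can also build the expression once and call it repeatedly, here supplying just the shapes (a brief sketch):

expr = ctg.array_contract_expression(
    inputs=inputs,
    shapes=[a.shape for a in arrays],
    optimize='auto-hq',
)
expr(*arrays)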

If you want to relate the internal contraction ‘symbols’ (single unicode letters) that appear on the contraction tree to the supplied ‘indices’ (arbitrary hashable objects) then you can call the get_symbol_map function:

symbol_map = ctg.get_symbol_map(inputs)
# e.g. the symbols for the indices of `t`, the last tensor from the loop above
[symbol_map[ind] for ind in t.inds]
['Ď', 'ý', 'ì']