{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "(high-level-interface)=\n", "# High Level\n", "\n", "You can use `cotengra` as a drop-in replacement for `numpy.einsum` or \n", "`opt_einsum.contract`, \n", "and benefit from the improved optimization, contraction routines and features such as slicing, with the following high level interface functions.\n", "\n", "::::{grid} 2\n", ":::{grid-item-card}\n", "Traditional einsum style where the contraction is specified as a compact and more human-readable ``equation`` string:\n", " - [`cotengra.einsum`](cotengra.einsum)\n", " - [`cotengra.einsum_tree`](cotengra.einsum_tree)\n", " - [`cotengra.einsum_expression`](cotengra.einsum_expression)\n", ":::\n", ":::{grid-item-card}\n", "Programmatic style, where the indices can be specified as sequences of arbitrary hashable objects:\n", " - [`cotengra.array_contract`](cotengra.array_contract)\n", " - [`cotengra.array_contract_tree`](cotengra.array_contract_tree)\n", " - [`cotengra.array_contract_expression`](cotengra.array_contract_expression)\n", ":::\n", "::::\n", "\n", "The following are equivalent ways to perform a matrix multiplication and transpose of ``x`` and ``y``:\n", "\n", "```python\n", "import cotengra as ctg\n", "\n", "# einsum style\n", "z = ctg.einsum(\"ab,bc->ca\", x, y)\n", "\n", "# programmatic style\n", "z = ctg.array_contract(\n", " arrays=(x, y), \n", " inputs=[(0, 1), (1, 2)], \n", " output=(2, 0),\n", ")\n", "```\n", "\n", "\n", "The `{_tree}` functions return a [`ContractionTree`](cotengra.ContractionTree) object\n", "which can be used to inspect the order and various properties such as contraction\n", "cost and width. The `{_expression}` functions return a function that performs the\n", "contraction, which can be called with any matching input arrays.\n", "All of these functions take an ``optimize`` kwarg which specifies the \n", "contraction strategy. 
It can be one of the following:\n", "\n", "- ``str`` : a preset such as ``'auto'``, ``'auto-hq'``, ``'greedy'`` or ``'optimal'``\n", "- ``PathOptimizer`` : a custom optimizer from ``cotengra`` or ``opt_einsum``\n", "- ``ContractionTree`` : a contraction tree generated previously\n", "- ``Sequence[tuple[int]]`` : an explicit path, specified manually or generated previously\n", "\n", "If the chosen method provides sliced indices, the contraction will use them to reduce memory usage. The default is the `cotengra` preset ``'auto'``. If you explicitly want to use an `opt_einsum` preset you can supply, for example, ``optimize='opt_einsum:auto'``.\n", "\n", "See the docstring of [`array_contract_expression`](cotengra.array_contract_expression)\n", "for other options." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## `einsum` interfaces\n", "\n", "Here the contraction is specified using the string equation form of\n", "[`numpy.einsum`](https://numpy.org/doc/stable/reference/generated/numpy.einsum.html)\n", "and other `einsum` implementations. For example:\n", "\n", "- ``\"ab,bc->ac\"`` matrix multiplication\n", "- ``\"Xab,Xbc->Xac\"`` batch matrix multiplication\n", "- ``\"ab,ab->ab\"`` Hadamard (elementwise) product\n", "\n", "```{hint}\n", "`cotengra` supports all types of explicit equation (such as repeated and hyper/batch indices appearing any number of times).\n", "The `einsum` versions also support automatically expanded dimensions using an ellipsis ``...``.\n", "```\n", "\n", "If the right-hand side is not specified then the output indices are computed as every index that appears *once* among the left-hand side inputs, in sorted order. 
For example ``'ba'`` is completed as a transposition: ``'ba->ab'``.\n", "\n", "Let's generate a less trivial example:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ab,cbd,edf,gfh,ih,ajk,clkm,enmo,gpoq,irq,jst,lutv,nwvx,pyxz,rAz,sB,uBC,wCD,yDE,AE->\n" ] } ], "source": [ "%config InlineBackend.figure_formats = ['svg']\n", "import cotengra as ctg\n", "\n", "# generate the 'inputs and output' format contraction\n", "inputs, output, shapes, size_dict = ctg.utils.lattice_equation([4, 5])\n", "\n", "# generate the 'eq' format\n", "eq = ctg.utils.inputs_output_to_eq(inputs, output)\n", "print(eq)\n", "\n", "# make example arrays\n", "arrays = ctg.utils.make_arrays_from_inputs(inputs, size_dict, seed=42)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And perform the contraction:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array(776.48544934)" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ctg.einsum(eq, *arrays)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we want to inspect the contraction before performing it, we can build the contraction tree first. Here we supply just the shapes of the arrays:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'flops': 964, 'write': 293, 'size': 32}" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tree = ctg.einsum_tree(eq, *shapes)\n", "\n", "# some typical quantities of interest:\n", "tree.contract_stats()\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The contraction tree has many methods to inspect and manipulate the contraction."
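, "\n", "\n", "As a sketch, some of these (the method names below are from the `ContractionTree` API; see its docstrings for the full set, and note `contract_stats` above already summarises several of them):\n", "\n", "```python\n", "tree = ctg.einsum_tree(eq, *shapes)\n", "\n", "# log2 of the size of the largest intermediate tensor\n", "print(tree.contraction_width())\n", "\n", "# total number of scalar operations\n", "print(tree.contraction_cost())\n", "\n", "# the equivalent explicit pairwise contraction path\n", "path = tree.get_path()\n", "```"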
] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(
<Figure>, <Axes>)" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tree.plot_rubberband()\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Sometimes it is useful to generate a function that performs the contraction for any given inputs. This can be done with the [`einsum_expression`](cotengra.einsum_expression) function:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array(776.48544934)" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "expr = ctg.einsum_expression(eq, *shapes)\n", "expr(*arrays)\n" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array(2252.66569969)" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "arrays = ctg.utils.make_arrays_from_inputs(inputs, size_dict, seed=43)\n", "expr(*arrays)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{note}\n", "[`einsum`](cotengra.einsum) itself caches the expressions it uses for presets by default, so in terms of performance savings this is often not necessary.\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## `array_contract` interfaces\n", "\n", "The einsum format is less convenient for dynamically generated contractions.\n", "The alternative interfaces `cotengra` provides are the [`array_contract`](cotengra.array_contract) functions. 
These specify a contraction with the following three arguments:\n", "\n", " - `inputs : Sequence[Sequence[Hashable]]`, the indices of each input\n", " - `output : Sequence[Hashable]`, the output indices\n", " - `size_dict : Mapping[Hashable, int]`, the size of each index\n", "\n", "The indices are mapped ('canonicalized') into single letters in the order they appear in `inputs`, allowing matching geometries to be cached.\n", "\n", "If `output` is not specified it is calculated as the indices that appear *once* in `inputs`, in the order they appear in `inputs`. Note this is slightly different from `einsum`, which sorts the output indices, since here the indices are only required to be hashable.\n", "\n", "```{note}\n", "You can also supply `shapes` to [`array_contract_tree`](cotengra.array_contract_tree) and [`array_contract_expression`](cotengra.array_contract_expression) rather than build ``size_dict`` manually.\n", "```\n", "\n", "As an example, we'll directly contract the arrays from a tensor network generated with [`quimb`](https://quimb.readthedocs.io):" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
TensorNetworkGen(tensors=100, indices=150)
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACU, _67af2fAAACO, _67af2fAAABv], tags={I65}),backend=numpy, dtype=float64, data=array([[[-0.29301534, -0.16665405],\n", " [ 0.05232625, 0.78690332]],\n", "\n", " [[ 0.7520511 , -0.29549456],\n", " [ 1.38767439, 0.17473486]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACc, _67af2fAAACb, _67af2fAAACB], tags={I66}),backend=numpy, dtype=float64, data=array([[[ 0.79521858, -0.26720061],\n", " [-0.10519735, -0.56633547]],\n", "\n", " [[-0.09871576, -1.46478992],\n", " [ 0.4915131 , 0.50935446]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACd, _67af2fAAACc, _67af2fAAABc], tags={I67}),backend=numpy, dtype=float64, data=array([[[ 0.75544235, 1.77164135],\n", " [-1.22289025, 1.49047828]],\n", "\n", " [[ 0.12158551, -2.31867424],\n", " [ 0.01848823, 0.85483554]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACg, _67af2fAAACf, _67af2fAAACe], tags={I68}),backend=numpy, dtype=float64, data=array([[[ 0.7227368 , -0.22551489],\n", " [-0.4888639 , 0.70066707]],\n", "\n", " [[-1.08746465, -0.97158584],\n", " [ 0.85236144, 0.96269801]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACh, _67af2fAAACS, _67af2fAAABm], tags={I69}),backend=numpy, dtype=float64, data=array([[[ 0.3296755 , -0.91499217],\n", " [ 0.92829453, 1.42199598]],\n", "\n", " [[-1.99166231, 0.0074796 ],\n", " [-0.94931243, -1.12781258]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACV, _67af2fAAACP, _67af2fAAABk], tags={I70}),backend=numpy, dtype=float64, data=array([[[-0.28388004, 2.44399239],\n", " [-0.23098441, -1.59771318]],\n", "\n", " [[-2.56786834, 1.1907446 ],\n", " [ 1.49135773, 0.5853944 ]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACd, _67af2fAAAAs, _67af2fAAAAW], tags={I71}),backend=numpy, dtype=float64, data=array([[[ 0.83576201, 2.15339076],\n", " [-0.80696793, 0.24723496]],\n", "\n", " [[ 0.48071583, 0.22638018],\n", " [ 0.99993802, -0.17183673]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACi, _67af2fAAACM, _67af2fAAABB], tags={I72}),backend=numpy, dtype=float64, data=array([[[-1.72953586, 0.8824201 ],\n", " [ 0.94282464, -1.49052868]],\n", "\n", " [[ 0.49363039, 1.02366011],\n", " [ 1.3219318 , 0.13656684]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACj, _67af2fAAACf, _67af2fAAABG], tags={I73}),backend=numpy, dtype=float64, data=array([[[ 0.65258668, 0.22272843],\n", " [-0.37710862, -1.4005133 ]],\n", "\n", " [[-0.87819954, 1.06389014],\n", " [-2.67796998, 1.16469057]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACk, _67af2fAAACN, _67af2fAAAAt], tags={I74}),backend=numpy, dtype=float64, data=array([[[ 1.10930311, 0.96710875],\n", " [-0.18449632, -2.19632474]],\n", "\n", " [[ 2.32847207, -0.19463999],\n", " [ 0.94570235, 0.43298386]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACl, _67af2fAAACX, _67af2fAAAAl], tags={I75}),backend=numpy, dtype=float64, data=array([[[-0.1160343 , 1.01658209],\n", " [-1.79924617, 0.6967389 ]],\n", "\n", " [[ 1.53000807, -0.91475514],\n", " [ 1.22643248, 1.18258538]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACm, _67af2fAAAAo, _67af2fAAAAh], tags={I76}),backend=numpy, dtype=float64, data=array([[[ 0.6211931 , -1.46372333],\n", " [-0.67258141, 0.69158572]],\n", "\n", " [[-0.25236025, 1.14300204],\n", " [-0.41196788, -0.4113092 ]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACg, _67af2fAAACL, _67af2fAAABY], tags={I77}),backend=numpy, dtype=float64, data=array([[[-0.27685081, 0.45897355],\n", " [-0.10725809, -0.17560083]],\n", "\n", " [[-0.75294937, -1.1838208 ],\n", " [ 0.68087654, -1.09662417]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACk, _67af2fAAABe, _67af2fAAABA], tags={I78}),backend=numpy, dtype=float64, data=array([[[-0.92110031, 2.04748767],\n", " [-0.20930679, 0.09577991]],\n", "\n", " [[-0.40793428, 0.34942966],\n", " [-1.05992335, -0.70095478]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACl, _67af2fAAABz, _67af2fAAAAN], tags={I79}),backend=numpy, dtype=float64, data=array([[[-0.42925512, 0.1648491 ],\n", " [-0.06745807, -0.49105142]],\n", "\n", " [[-0.66084483, -0.71542717],\n", " [-1.39008283, 1.19607223]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACj, _67af2fAAABo, _67af2fAAAAy], tags={I80}),backend=numpy, dtype=float64, data=array([[[ 2.02950952, -1.22765565],\n", " [ 1.24918745, -0.59412117]],\n", "\n", " [[-1.15263769, -0.8188466 ],\n", " [-1.23054708, 1.60579691]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACn, _67af2fAAACh, _67af2fAAAAA], tags={I81}),backend=numpy, dtype=float64, data=array([[[ 1.06905335, 0.45742907],\n", " [ 0.93140848, -0.28482561]],\n", "\n", " [[-1.26345935, -1.16125411],\n", " [-1.58408487, -0.8977538 ]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACo, _67af2fAAACW, _67af2fAAAAE], tags={I82}),backend=numpy, dtype=float64, data=array([[[-0.31384969, -1.14049149],\n", " [-1.19614084, 1.25242178]],\n", "\n", " [[ 0.24187925, 0.02246282],\n", " [-0.48479543, 0.01896328]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACr, _67af2fAAACq, _67af2fAAACp], tags={I83}),backend=numpy, dtype=float64, data=array([[[-0.8081965 , -1.6399214 ],\n", " [ 0.15667124, -0.26231046]],\n", "\n", " [[-0.98283266, 1.06778207],\n", " [-0.64056352, -1.38303642]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACm, _67af2fAAACR, _67af2fAAAAd], tags={I84}),backend=numpy, dtype=float64, data=array([[[-0.14068929, 0.41546312],\n", " [-0.93706581, 0.65419382]],\n", "\n", " [[-0.80648235, -0.74373297],\n", " [ 1.33716318, -1.61944607]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACe, _67af2fAAACQ, _67af2fAAABr], tags={I85}),backend=numpy, dtype=float64, data=array([[[-1.61184631, 1.62894392],\n", " [ 0.40530824, 0.71803393]],\n", "\n", " [[-1.07766989, -0.75596443],\n", " [ 0.02612967, 0.8684165 ]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACs, _67af2fAAABV, _67af2fAAAAR], tags={I86}),backend=numpy, dtype=float64, data=array([[[-1.00723475, -1.04873348],\n", " [ 0.84360271, 2.15955728]],\n", "\n", " [[ 0.49557051, 0.64082816],\n", " [-0.88432175, 0.49417542]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACo, _67af2fAAACa, _67af2fAAABO], tags={I87}),backend=numpy, dtype=float64, data=array([[[ 0.17196303, 2.17154549],\n", " [-0.95691292, -1.77934294]],\n", "\n", " [[ 0.1758767 , 0.27865058],\n", " [-0.97245654, -0.32586481]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACt, _67af2fAAACn, _67af2fAAABg], tags={I88}),backend=numpy, dtype=float64, data=array([[[ 0.16454028, 1.0543336 ],\n", " [-0.14183872, -0.06806919]],\n", "\n", " [[-1.25950768, -0.66979446],\n", " [-0.61563759, 0.32709842]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACK, _67af2fAAAAX, _67af2fAAAAC], tags={I89}),backend=numpy, dtype=float64, data=array([[[ 0.05836501, -0.14479672],\n", " [ 0.24275703, -0.66770472]],\n", "\n", " [[-0.9993962 , 1.10414673],\n", " [-2.17430902, -1.47743442]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACT, _67af2fAAABx, _67af2fAAABK], tags={I90}),backend=numpy, dtype=float64, data=array([[[-0.49148804, -0.50568072],\n", " [-0.38951434, -0.23834332]],\n", "\n", " [[-0.83087066, -0.07078329],\n", " [-0.66152192, -0.29215283]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAABn, _67af2fAAABa, _67af2fAAAAw], tags={I91}),backend=numpy, dtype=float64, data=array([[[-0.14846772, 1.19470227],\n", " [-0.44202288, 0.23673868]],\n", "\n", " [[-0.03347083, -0.32092881],\n", " [ 0.42150777, 0.5883475 ]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACq, _67af2fAAABU, _67af2fAAAAf], tags={I92}),backend=numpy, dtype=float64, data=array([[[-0.86598189, -0.49039864],\n", " [ 1.51802956, -0.59100812]],\n", "\n", " [[ 1.37447989, 0.35109993],\n", " [ 0.35337737, 0.96290664]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAABs, _67af2fAAABW, _67af2fAAAAz], tags={I93}),backend=numpy, dtype=float64, data=array([[[-0.42153279, -0.06794624],\n", " [ 0.13617862, 1.19429339]],\n", "\n", " [[-0.67560454, 0.87065649],\n", " [-0.39001935, 0.63779384]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACt, _67af2fAAAAq, _67af2fAAAAZ], tags={I94}),backend=numpy, dtype=float64, data=array([[[ 1.20231593, -0.26320706],\n", " [-1.38778291, -0.22438736]],\n", "\n", " [[-0.27412015, -0.0637929 ],\n", " [ 0.92141209, 0.76731005]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACi, _67af2fAAABi, _67af2fAAABd], tags={I95}),backend=numpy, dtype=float64, data=array([[[ 0.88076909, -0.1284671 ],\n", " [-0.9217294 , 0.99217325]],\n", "\n", " [[-1.02812186, -0.47629981],\n", " [ 1.76037976, 0.42173594]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACs, _67af2fAAACr, _67af2fAAACH], tags={I96}),backend=numpy, dtype=float64, data=array([[[-0.8272083 , -0.77226563],\n", " [ 1.40038892, -0.45392213]],\n", "\n", " [[ 0.99849229, -0.15310644],\n", " [-1.83419131, -0.15131365]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACb, _67af2fAAABt, _67af2fAAAAS], tags={I97}),backend=numpy, dtype=float64, data=array([[[ 0.67219424, -0.67334425],\n", " [ 1.0021386 , -0.55126219]],\n", "\n", " [[ 0.59799654, 0.36190119],\n", " [-1.33290845, -2.0234266 ]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACp, _67af2fAAACZ, _67af2fAAAAP], tags={I98}),backend=numpy, dtype=float64, data=array([[[-0.96087989, 0.66463361],\n", " [-0.29470099, 0.47419775]],\n", "\n", " [[ 0.28326056, 0.42166369],\n", " [-0.45476309, -2.62552245]]])
Tensor(shape=(2, 2, 2), inds=[_67af2fAAACY, _67af2fAAACI, _67af2fAAABq], tags={I99}),backend=numpy, dtype=float64, data=array([[[ 0.53322615, 0.07497022],\n", " [-0.12286424, -0.00459035]],\n", "\n", " [[ 0.74915553, -0.93343275],\n", " [-0.5978229 , 0.35686287]]])

...

" ], "text/plain": [ "TensorNetworkGen(tensors=100, indices=150)" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import quimb.tensor as qtn\n", "\n", "tn = qtn.TN_rand_reg(100, 3, D=2, seed=42)\n", "tn.draw(edge_color=True)\n", "tn\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Although `quimb` can automatically generate the einsum equation in this case,\n", "constructing the equation string by hand is generally tedious. Instead we can directly\n", "use the tensor network's index names:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "arrays = []\n", "inputs = []\n", "for t in tn:\n", " arrays.append(t.data)\n", " inputs.append(t.inds)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this case they are randomly generated unique identifiers, but they could\n", "be anything hashable, as long as they match the shapes of the arrays and encode the geometry of the contraction."
] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[('_67af2fAAAAC', '_67af2fAAAAB', '_67af2fAAAAA'),\n", " ('_67af2fAAAAF', '_67af2fAAAAE', '_67af2fAAAAD'),\n", " ('_67af2fAAAAI', '_67af2fAAAAH', '_67af2fAAAAG'),\n", " ('_67af2fAAAAL', '_67af2fAAAAK', '_67af2fAAAAJ'),\n", " ('_67af2fAAAAN', '_67af2fAAAAM', '_67af2fAAAAB')]" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "inputs[:5]\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Perform the contraction, using the higher-quality ``'auto-hq'`` preset:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 356 ms, sys: 585 µs, total: 356 ms\n", "Wall time: 322 ms\n" ] }, { "data": { "text/plain": [ "array(-1.78478002e+15)" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "ctg.array_contract(arrays, inputs, optimize='auto-hq')\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Because the contraction expression is cached, performing the contraction \n", "again with the same preset is faster:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 157 ms, sys: 4.63 ms, total: 162 ms\n", "Wall time: 33.1 ms\n" ] }, { "data": { "text/plain": [ "array(-1.78478002e+15)" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "ctg.array_contract(arrays, inputs, optimize='auto-hq')\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to relate the internal contraction 'symbols' (single unicode \n", "letters) that appear on the contraction tree to the supplied 'indices' \n", "(arbitrary hashable objects) then you can call the \n", "[`get_symbol_map`](cotengra.get_symbol_map)
function:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['Ď', 'ý', 'ì']" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "symbol_map = ctg.get_symbol_map(inputs)\n", "[symbol_map[ind] for ind in t.inds]\n" ] } ], "metadata": { "kernelspec": { "display_name": "py311", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.3" }, "orig_nbformat": 4 }, "nbformat": 4, "nbformat_minor": 2 }