Installation#

The basic requirements are opt_einsum and either cytoolz or toolz. To install cotengra from source, clone the repository locally, navigate into the source directory and then run:

pip install -U .

or, if you want to edit the source:

pip install --no-deps -U -e .

To install it directly from GitHub (e.g. in a Colab notebook):

pip install -U git+https://github.com/jcmgray/cotengra.git
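
As a quick check that the install has worked, you can import the package and print its version (assuming your version exposes the standard __version__ attribute):

python -c "import cotengra as ctg; print(ctg.__version__)"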

Beyond these basic requirements, the optional dependencies are detailed below.

Hint

A recommended selection of the optional dependencies below, covering most use-cases, is:

kahypar tqdm optuna loky networkx autoray
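
These can be installed together with:

pip install -U kahypar tqdm optuna loky networkx autoray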

Optimization#

  • kahypar - Karlsruhe Hypergraph Partitioning, for high quality divisive tree building (available via pip, unfortunately not yet via conda or for Windows)

  • tqdm - for showing live progress (available via pip or conda)

To perform the hyper-optimization (rather than just randomly sampling trial parameters), one of the following libraries is needed (a usage sketch follows the list):

  • optuna - Tree of Parzen Estimators used by default

  • baytune - Bayesian Tuning and Bandits; Gaussian processes used by default

  • chocolate - the CMAES optimization algorithm is used by default (sampler='QuasiRandom' also useful)

  • skopt - random forest as well as Gaussian process regressors (high quality but slow)

  • nevergrad - various population and evolutionary algorithms (very fast and suitable for highly parallel path finding)
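
As a minimal sketch of how one of these is selected, the optlib keyword of HyperOptimizer names the library used to drive the search (here assuming optuna is installed; the keyword values follow the package names):

import cotengra as ctg

# drive the hyper-optimization of trial parameters with optuna
opt = ctg.HyperOptimizer(
    methods=["greedy", "kahypar"],  # path finding drivers to tune over
    max_repeats=128,                # total number of trials to run
    optlib="optuna",                # which optimization library to use
    progbar=True,                   # live progress, requires tqdm
)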

If you want to experiment with other algorithms, the following can be used:

  • python-igraph - various other community detection algorithms (though these have no hyperedge support and usually perform worse than kahypar).

  • QuickBB

  • FlowCutter

The latter two are both accessed via their command line interfaces, so the following executables should be placed somewhere on the path: quickbb_64 and flow_cutter_pace17.
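
A minimal, standard-library-only way to check that these executables are findable is:

import shutil

# each line should print the full path to the binary, not None
for exe in ("quickbb_64", "flow_cutter_pace17"):
    print(exe, "->", shutil.which(exe))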

Parallelization#

The parallel functionality requires any of the following (see the sketch after this list):

  • loky - either directly or via joblib

  • dask - for distributed scheduling via dask.distributed

  • ray - for distributed computing
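
As a sketch, parallelism over the search trials is then controlled with the parallel keyword of HyperOptimizer (parallel=True picks a default pool; an integer sets the number of workers):

import cotengra as ctg

# sample and score contraction trees using a parallel pool of workers
opt = ctg.HyperOptimizer(parallel=True)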

Visualization#

The following packages enable visualization (a sketch follows the list):

  • networkx - for graph manipulation and layout

  • matplotlib - for plotting
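
As a rough sketch of the kind of plot this enables (assuming the tree plotting methods, such as plot_tent here, are present in your version):

import cotengra as ctg

# generate a small random contraction and find a tree for it
inputs, output, shapes, size_dict = ctg.utils.rand_equation(10, 3, seed=42)
tree = ctg.HyperOptimizer().search(inputs, output, size_dict)

# visualize the contraction tree laid out over the network
tree.plot_tent()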

Contraction#

If you want to perform the contractions using cotengra itself you’ll need:

  • autoray - which supports at least numpy, cupy, torch, tensorflow, jax, and autograd, among others.
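
With autoray installed, contractions can then be performed directly, for example via cotengra's einsum interface (a minimal sketch with numpy arrays):

import numpy as np
import cotengra as ctg

# contract a chain of three random matrices: (ab)(bc)(cd) -> (ad)
x, y, z = (np.random.rand(8, 8) for _ in range(3))
out = ctg.einsum("ab,bc,cd->ad", x, y, z)
print(out.shape)  # -> (8, 8)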