Installation

cotengra is available on both PyPI and conda-forge. While cotengra itself is pure Python, the recommended distribution for installing the various optional dependencies is mambaforge.

Installing with pip:

pip install cotengra

Installing with conda:

conda install -c conda-forge cotengra

Installing with mambaforge:

mamba install cotengra

Hint

Mamba is a faster drop-in replacement for conda, and the -forge distribution comes pre-configured with only the conda-forge channel, which further simplifies and speeds up installing dependencies.

Installing the latest version directly from GitHub:

If you want to check out the latest features and fixes, you can install directly from the GitHub repository:

pip install -U git+https://github.com/jcmgray/cotengra.git

Installing a local, editable development version:

If you want to make changes to the source code and test them out, you can install a local editable version of the package:

git clone https://github.com/jcmgray/cotengra.git
pip install --no-deps -U -e cotengra/
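
As a quick check that the development version is importable (this assumes the package exposes a __version__ attribute, as recent releases do):

import cotengra
print(cotengra.__version__)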

Beyond the base install, the optional dependencies are detailed below.

Hint

The recommended selection of optional dependencies from below covering most use-cases is:

autoray cotengrust cytoolz kahypar loky networkx opt_einsum optuna tqdm
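
which, for example, can all be installed at once with pip:

pip install autoray cotengrust cytoolz kahypar loky networkx opt_einsum optuna tqdm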

Contraction

If you want to perform the contractions using cotengra itself, you’ll need:

  • autoray - which supports at least numpy, cupy, torch, tensorflow, jax, and autograd among others.
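
As a minimal sketch, assuming a recent cotengra with the top-level einsum interface:

import numpy as np
import cotengra as ctg

# contract two matrices; autoray dispatches on the array type, so
# e.g. cupy, torch or jax arrays would work in exactly the same way
x = np.random.rand(4, 5)
y = np.random.rand(5, 6)
z = ctg.einsum('ab,bc->ac', x, y)
print(z.shape)  # (4, 6)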

Optimization

  • kahypar - Karlsruhe Hypergraph Partitioning for high quality divisive tree building (available via pip, though unfortunately not yet via conda or for Windows)

  • cotengrust - Rust-accelerated path-finding primitives

  • tqdm - for showing live progress (available via pip or conda)

  • cytoolz - a couple of slightly faster utility functions
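
As a sketch, an optimizer that uses kahypar for its trials and shows a live progress bar might be constructed like so (methods and progbar are standard HyperOptimizer options; progbar requires tqdm):

import cotengra as ctg

# restrict the trial path-finders to kahypar-based partitioning
# and display a live tqdm progress bar during the search
opt = ctg.HyperOptimizer(methods=['kahypar'], progbar=True)

Such an optimizer can then be passed wherever an optimize= argument is accepted.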

To perform the hyper-optimization (rather than just randomly sampling trials), one of the following libraries is needed:

  • optuna - Tree of Parzen Estimators used by default

  • baytune - Bayesian Tuning and Bandits - Gaussian Processes used by default

  • chocolate - the CMAES optimization algorithm is used by default (sampler='QuasiRandom' also useful)

  • skopt - random forest as well as Gaussian process regressors (high quality but slow)

  • nevergrad - various population and evolutionary algorithms (very fast and suitable for highly parallel path finding)
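
As a sketch, the driving library can also be selected explicitly rather than relying on the default (optlib and max_repeats are standard HyperOptimizer options):

import cotengra as ctg

# run 32 optimization trials, with optuna suggesting
# the parameters for each trial
opt = ctg.HyperOptimizer(optlib='optuna', max_repeats=32)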

If you want to experiment with other algorithms, then the following can be used:

  • python-igraph - various other community detection algorithms (though no hyperedge support and usually worse performance than kahypar).

  • QuickBB

  • FlowCutter

The latter two are both accessed via their command line interfaces, so the following executables should be placed somewhere on the PATH: quickbb_64 and flow_cutter_pace17.
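
A quick way to check that these are discoverable, using only the standard library:

import shutil

# each should resolve to a full path rather than None
for exe in ("quickbb_64", "flow_cutter_pace17"):
    print(exe, "->", shutil.which(exe))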

Parallelization

The parallel functionality requires any of the following:

  • loky - used for the default process pool (and part of the recommended selection above)

An explicit concurrent.futures compatible executor can also be supplied directly.
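
As a sketch, parallelism is switched on via the standard parallel option (parallel=True uses a default pool of workers):

import cotengra as ctg

# evaluate optimization trials across a pool of worker processes
opt = ctg.HyperOptimizer(parallel=True)

An integer can instead be passed to set the number of workers explicitly.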

Visualization

The following packages enable visualization:

  • matplotlib - the plotting backend for all the figures

  • networkx - graph manipulation and layouts for trees and hypergraphs
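
As a sketch of visualizing an optimized contraction tree (rand_equation, search and plot_tent are assumed to behave as in recent releases):

import cotengra as ctg

# generate a random contraction, find a high quality tree for it,
# then plot the tree 'tent-style' over the underlying hypergraph
inputs, output, shapes, size_dict = ctg.utils.rand_equation(10, 3, seed=42)
tree = ctg.HyperOptimizer().search(inputs, output, size_dict)
tree.plot_tent()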