cotengra.parallel¶
Interface for parallelism.
Attributes¶

| _AUTO_BACKEND |
| have_loky |
| have_joblib |
| _DEFAULT_BACKEND |
| ProcessPoolHandler |
| ThreadPoolHandler |
| _unpack_dispatch |
| _RAY_EXECUTOR |

Classes¶

| RayFuture | Basic concurrent.futures-like future wrapping a ray ObjectRef. |
| RayExecutor | Basic concurrent.futures-like interface using ray. |

Functions¶

| get_pool | Get a parallel pool. |
| _infer_backend | Return the backend type of pool - cached for speed. |
| get_n_workers | Extract how many workers our pool has (mostly for working out how many tasks to pre-dispatch). |
| set_parallel_backend | Create a parallel pool of type backend which registers it as the default for 'auto' parallel. |
| maybe_leave_pool | Logic required for nested parallelism in dask.distributed. |
| maybe_rejoin_pool | Logic required for nested parallelism in dask.distributed. |
| submit | Interface for submitting fn(*args, **kwargs) to pool. |
| scatter | Interface for maybe turning data into a remote object or reference. |
| can_scatter | Whether pool can make objects remote. |
| should_nest | Given argument pool, should we try nested parallelism. |
| _get_pool_dask | Maybe get an existing or create a new dask.distributed client. |
| _unpack_futures | Allows passing futures by reference - replaces all RayFuture objects with their underlying ObjectRef. |
| get_remote_fn | Cached retrieval of remote function. |
| get_deploy | Alternative for 'non-function' callables - e.g. partial functions. |
| _get_pool_ray | Maybe get an existing or create a new RayExecutor, thus initializing ray. |
Module Contents¶
- cotengra.parallel._AUTO_BACKEND = None¶
- cotengra.parallel.have_loky¶
- cotengra.parallel.have_joblib¶
- cotengra.parallel._DEFAULT_BACKEND = 'loky'¶
- cotengra.parallel.get_pool(n_workers=None, maybe_create=False, backend=None)[source]¶
Get a parallel pool.
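A minimal usage sketch based on the signature above; the exact pool object returned depends on the backend (with no backend given, the default 'loky' is used):

```python
from cotengra.parallel import get_pool

# get an existing pool, or create one with 4 workers if none exists yet
pool = get_pool(n_workers=4, maybe_create=True)
```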
- cotengra.parallel.get_n_workers(pool=None)[source]¶
Extract how many workers our pool has (mostly for working out how many tasks to pre-dispatch).
- cotengra.parallel.set_parallel_backend(backend)[source]¶
Create a parallel pool of type backend which registers it as the default for 'auto' parallel.
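A minimal sketch, using the documented default backend string 'loky' (the full set of accepted backend strings is not listed here):

```python
from cotengra.parallel import set_parallel_backend

# make 'loky' the pool that parallel='auto' resolves to from now on
set_parallel_backend("loky")
```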
- cotengra.parallel.maybe_leave_pool(pool)[source]¶
Logic required for nested parallelism in dask.distributed.
- cotengra.parallel.maybe_rejoin_pool(is_worker, pool)[source]¶
Logic required for nested parallelism in dask.distributed.
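A sketch of how these two helpers pair up, assuming maybe_leave_pool returns the is_worker flag that maybe_rejoin_pool expects (an inference from the signatures above, not confirmed here):

```python
from cotengra.parallel import get_pool, maybe_leave_pool, maybe_rejoin_pool

def nested_work():
    # placeholder for work that itself wants to run in parallel
    return sum(range(10))

pool = get_pool(maybe_create=True)
is_worker = maybe_leave_pool(pool)  # temporarily step out if inside a worker
try:
    result = nested_work()
finally:
    maybe_rejoin_pool(is_worker, pool)  # undo the leave, if one happened
```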
- cotengra.parallel.submit(pool, fn, *args, **kwargs)[source]¶
Interface for submitting fn(*args, **kwargs) to pool.
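A minimal usage sketch, assuming a concurrent.futures-style backend (such as the default loky pool) whose futures expose .result():

```python
from cotengra.parallel import get_pool, submit

pool = get_pool(maybe_create=True)
fut = submit(pool, pow, 2, 10)  # schedule pow(2, 10) on the pool
print(fut.result())             # -> 1024
```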
- cotengra.parallel.scatter(pool, data)[source]¶
Interface for maybe turning data into a remote object or reference.
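A minimal sketch; whether data is actually made remote depends on the backend (a plain process pool may simply return it unchanged):

```python
from cotengra.parallel import get_pool, scatter, submit

pool = get_pool(maybe_create=True)
data = scatter(pool, list(range(1000)))  # maybe a remote object/reference
fut = submit(pool, sum, data)            # workers can reuse scattered data
```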
- cotengra.parallel.ProcessPoolHandler¶
- cotengra.parallel.ThreadPoolHandler¶
- cotengra.parallel._get_pool_dask(n_workers=None, maybe_create=False)[source]¶
Maybe get an existing or create a new dask.distributed client.
- Parameters:
n_workers (None or int, optional) – The number of workers to request if creating a new client.
maybe_create (bool, optional) – Whether to create a new local cluster and client if no existing client is found.
- Return type:
None or dask.distributed.Client
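A minimal sketch based only on the signature and return type above (requires dask.distributed to be installed):

```python
from cotengra.parallel import _get_pool_dask

# returns None if no client exists and maybe_create=False
client = _get_pool_dask(n_workers=2, maybe_create=True)
if client is not None:
    print(client)  # a dask.distributed.Client
```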
- class cotengra.parallel.RayFuture(obj)[source]¶
Basic concurrent.futures-like future wrapping a ray ObjectRef.
- __slots__ = ('_obj', '_cancelled')¶
- _obj¶
- _cancelled = False¶
- cotengra.parallel._unpack_dispatch¶
- cotengra.parallel._unpack_futures(x)[source]¶
Allows passing futures by reference - takes e.g. args and kwargs and replaces all RayFuture objects with their underlying ObjectRef within all nested tuples, lists and dicts. [Subclassing ObjectRef might avoid needing this.]
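A conceptual sketch of such unpacking, for illustration only (the library's actual implementation presumably dispatches via the _unpack_dispatch table above):

```python
from cotengra.parallel import RayFuture

def unpack_futures_sketch(x):
    # hypothetical re-implementation, not the library's code
    if isinstance(x, RayFuture):
        return x._obj  # swap the wrapper for its underlying ObjectRef
    if isinstance(x, (tuple, list)):
        return type(x)(unpack_futures_sketch(v) for v in x)
    if isinstance(x, dict):
        return {k: unpack_futures_sketch(v) for k, v in x.items()}
    return x  # anything else passes through untouched
```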
- cotengra.parallel.get_deploy(**remote_opts)[source]¶
Alternative for ‘non-function’ callables - e.g. partial functions - pass the callable object too.
- class cotengra.parallel.RayExecutor(*args, default_remote_opts=None, **kwargs)[source]¶
Basic concurrent.futures-like interface using ray.
- default_remote_opts¶
- _maybe_inject_remote_opts(remote_opts=None)[source]¶
Return the default remote options, possibly overriding some with those supplied by a submit call.
- submit(fn, *args, pure=False, remote_opts=None, **kwargs)[source]¶
Remotely run fn(*args, **kwargs), returning a RayFuture.
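A minimal usage sketch, assuming ray is installed and that RayFuture exposes a concurrent.futures-style .result() (num_cpus is a standard ray remote option):

```python
from cotengra.parallel import RayExecutor

ex = RayExecutor(default_remote_opts={"num_cpus": 1})
fut = ex.submit(sum, [1, 2, 3])  # runs remotely, returns a RayFuture
print(fut.result())              # -> 6
```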
- cotengra.parallel._RAY_EXECUTOR = None¶
- cotengra.parallel._get_pool_ray(n_workers=None, maybe_create=False)[source]¶
Maybe get an existing or create a new RayExecutor, thus initializing ray.
- Parameters:
n_workers (None or int, optional) – The number of workers to request if creating a new client.
maybe_create (bool, optional) – Whether to initialize ray and return a RayExecutor if it is not already initialized.
- Return type:
None or RayExecutor