pgiri / Dispy

Licence: other
Distributed and Parallel Computing Framework with / for Python


Projects that are alternatives of or similar to Dispy

Awesome Parallel Computing
A curated list of awesome parallel computing resources
Stars: ✭ 212 (-4.5%)
Mutual labels:  parallel-computing, distributed-computing
Distributed-System-Algorithms-Implementation
Algorithms for implementation of Clock Synchronization, Consistency, Mutual Exclusion, Leader Election
Stars: ✭ 39 (-82.43%)
Mutual labels:  distributed-computing, cloud-computing
ParallelUtilities.jl
Fast and easy parallel mapreduce on HPC clusters
Stars: ✭ 28 (-87.39%)
Mutual labels:  parallel-computing, distributed-computing
pyabc
pyABC: distributed, likelihood-free inference
Stars: ✭ 13 (-94.14%)
Mutual labels:  parallel-computing, distributed-computing
Spark With Python
Fundamentals of Spark with Python (using PySpark), code examples
Stars: ✭ 150 (-32.43%)
Mutual labels:  parallel-computing, distributed-computing
job stream
An MPI-based C++ or Python library for easy distributed pipeline processing
Stars: ✭ 32 (-85.59%)
Mutual labels:  parallel-computing, distributed-computing
distex
Distributed process pool for Python
Stars: ✭ 101 (-54.5%)
Mutual labels:  parallel-computing, distributed-computing
asyncoro
Python framework for asynchronous, concurrent, distributed, network programming with coroutines
Stars: ✭ 50 (-77.48%)
Mutual labels:  distributed-computing, cloud-computing
Parapet
A purely functional library to build distributed and event-driven systems
Stars: ✭ 106 (-52.25%)
Mutual labels:  parallel-computing, distributed-computing
Pwrake
Parallel Workflow extension for Rake, runs on multicores, clusters, clouds.
Stars: ✭ 57 (-74.32%)
Mutual labels:  parallel-computing, distributed-computing
Backend.ai
Backend.AI is a streamlined, container-based computing cluster orchestrator that hosts diverse programming languages and popular computing/ML frameworks, with pluggable heterogeneous accelerator support including CUDA and ROCM.
Stars: ✭ 233 (+4.95%)
Mutual labels:  cloud-computing, distributed-computing
Future.apply
🚀 R package: future.apply - Apply Function to Elements in Parallel using Futures
Stars: ✭ 159 (-28.38%)
Mutual labels:  parallel-computing, distributed-computing
Amadeus
Harmonious distributed data analysis in Rust.
Stars: ✭ 240 (+8.11%)
Mutual labels:  parallel-computing, distributed-computing
prometheus-spec
Censorship-resistant trustless protocols for smart contract, generic & high-load computing & machine learning on top of Bitcoin
Stars: ✭ 24 (-89.19%)
Mutual labels:  distributed-computing, cloud-computing
Future
🚀 R package: future: Unified Parallel and Distributed Processing in R for Everyone
Stars: ✭ 735 (+231.08%)
Mutual labels:  parallel-computing, distributed-computing
Geni
A Clojure dataframe library that runs on Spark
Stars: ✭ 152 (-31.53%)
Mutual labels:  parallel-computing, distributed-computing
Klyng
A message-passing distributed computing framework for node.js
Stars: ✭ 167 (-24.77%)
Mutual labels:  parallel-computing, distributed-computing
Hyperactive
A hyperparameter optimization and data collection toolbox for convenient and fast prototyping of machine-learning models.
Stars: ✭ 182 (-18.02%)
Mutual labels:  parallel-computing
Qix
Machine Learning、Deep Learning、PostgreSQL、Distributed System、Node.Js、Golang
Stars: ✭ 13,740 (+6089.19%)
Mutual labels:  distributed-computing
Cloudskew
Create free cloud architecture diagrams
Stars: ✭ 183 (-17.57%)
Mutual labels:  cloud-computing

dispy
=====

.. note:: Full documentation for dispy is now available at `dispy.org
          <https://dispy.org>`_.

`dispy <https://dispy.org>`_ is a comprehensive, yet easy to use framework for creating and using compute clusters to execute computations in parallel across multiple processors on a single machine (SMP), or among many machines in a cluster, grid or cloud. dispy is well suited for the data-parallel (SIMD) paradigm, where a computation is evaluated independently with different (large) datasets, with no communication among computation tasks (except for tasks sending intermediate results to the client).

dispy works with Python versions 2.7+ and 3.1+ on Linux, Mac OS X and Windows; it may work on other platforms (e.g., FreeBSD and other BSD variants) too.
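As a rough illustration of this data-parallel pattern, here is a local stand-in using only the standard library's process pool; dispy's clusters play an analogous role, except jobs run on nodes across the network rather than in local processes (the ``compute`` function here is a made-up example, not part of dispy)::

```python
# Local sketch of the data-parallel (SIMD) pattern described above:
# each dataset is processed independently, with no communication
# between tasks. dispy generalizes this across machines on a network.
from concurrent.futures import ProcessPoolExecutor

def compute(dataset):
    # An independent computation on one dataset (illustrative only)
    return sum(x * x for x in dataset)

if __name__ == '__main__':
    # Ten disjoint datasets, processed in parallel
    datasets = [range(i, i + 100) for i in range(0, 1000, 100)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(compute, datasets))
    print(results[0])  # sum of squares of 0..99
```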

Features
--------

  • dispy is implemented with `pycos <https://pycos.org>`_, an independent framework for asynchronous, concurrent, distributed, network programming with tasks (without threads). pycos uses non-blocking sockets with the I/O notification mechanisms epoll, kqueue and poll, and Windows I/O Completion Ports (IOCP), for high performance and scalability, so dispy works efficiently with a single node or large cluster(s) of nodes. pycos itself supports distributed/parallel computing, including transferring computations, files etc., and message passing (for communicating with the client and other computation tasks). While dispy can be used to schedule jobs of a computation to get the results, pycos can be used to create `distributed communicating processes <https://pycos.org/dispycos.html>`_ for a broad range of use cases.

  • Computations (Python functions or standalone programs) and their dependencies (files, Python functions, classes, modules) are distributed automatically.

  • Computation nodes can be anywhere on the network (local or remote). For security, either simple hash based authentication or SSL encryption can be used.

  • After each execution is finished, the results of execution, output, errors and exception trace are made available for further processing.

  • Nodes may become available dynamically: dispy will schedule jobs whenever a node is available and computations can use that node.

  • If a callback function is provided, dispy executes it when a job finishes; this can be used to process job results as they become available.

  • Client-side and server-side fault recovery are supported:

    If the user program (client) terminates unexpectedly (e.g., due to an uncaught exception), the nodes continue to execute scheduled jobs. If the client-side fault recovery option is used when creating a cluster, the results of the scheduled (but unfinished at the time of the crash) jobs for that cluster can be retrieved later.

    If a computation is marked reentrant when a cluster is created and a node (server) executing jobs for that computation fails, dispy automatically resubmits those jobs to other available nodes.

  • dispy can be used in a single process to use all the nodes exclusively (with JobCluster - simpler to use) or in multiple processes simultaneously sharing the nodes (with SharedJobCluster and dispyscheduler program).

  • Clusters can be `monitored and managed <https://dispy.org/httpd.html>`_ with a web browser.
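The callback behaviour listed above can be pictured with a small local stand-in: the standard library's futures deliver each result to a callback as the corresponding job finishes, which is the same shape as dispy's per-job callback (the names here are illustrative, not dispy's API)::

```python
# Local sketch of callback-style result handling, as described in the
# features above: a callback runs as each job finishes, so results can
# be processed as they become available rather than all at the end.
from concurrent.futures import ThreadPoolExecutor
import threading

results = []
done = threading.Event()
N = 5

def job_finished(future):
    # Called as each job completes (possibly out of submission order)
    results.append(future.result())
    if len(results) == N:
        done.set()

def compute(n):
    return n * n

with ThreadPoolExecutor(max_workers=2) as pool:
    for i in range(N):
        pool.submit(compute, i).add_done_callback(job_finished)
done.wait()
print(sorted(results))
```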

Dependencies
------------

dispy requires pycos_ for concurrent, asynchronous network programming with tasks. pycos is installed automatically if dispy is installed with pip. Under Windows, the efficient I/O Completion Ports (IOCP) notifier is supported only if `pywin32 <https://github.com/mhammond/pywin32>`_ is installed; otherwise, the inefficient select notifier is used.
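The standard library makes the same kind of platform-dependent choice of I/O notification mechanism that pycos does, so you can see which mechanism your platform offers with a one-liner (this inspects Python's ``selectors`` module, not pycos itself)::

```python
# Show the I/O notification mechanism the standard library picks on
# this platform: epoll on Linux, kqueue on BSD/macOS, select as the
# portable fallback. pycos makes an analogous choice, additionally
# using IOCP on Windows when pywin32 is available.
import selectors

sel = selectors.DefaultSelector()
print(type(sel).__name__)  # e.g. 'EpollSelector' on Linux
sel.close()
```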

Installation
------------

To install dispy, run::

    python -m pip install dispy

Release Notes
-------------

A short summary of changes for each release can be found at `News <https://pycos.com/forum/viewforum.php?f=11>`_. Detailed logs / changes are in `github commits <https://github.com/pgiri/dispy/commits/master>`_.

Authors
-------

  • Giridhar Pemmasani

Links
-----

  • Documentation is at `dispy.org <https://dispy.org>`_.
  • `Examples <https://dispy.org/examples.html>`_.
  • `GitHub (code repository) <https://github.com/pgiri/dispy>`_.