
UIUC-PPL / Charm4py

Licence: other
Parallel Programming with Python and Charm++

Programming Languages

python

Projects that are alternatives to or similar to Charm4py

Charm
The Charm++ parallel programming system. Visit https://charmplusplus.org/ for more information.
Stars: ✭ 96 (-62.93%)
Mutual labels:  asynchronous-tasks, runtime, hpc
Easylambda
distributed dataflows with functional list operations for data processing with C++14
Stars: ✭ 475 (+83.4%)
Mutual labels:  distributed-computing, hpc
Raftlib
The RaftLib C++ library, streaming/dataflow concurrency via C++ iostream-like operators
Stars: ✭ 717 (+176.83%)
Mutual labels:  runtime, hpc
Future.apply
🚀 R package: future.apply - Apply Function to Elements in Parallel using Futures
Stars: ✭ 159 (-38.61%)
Mutual labels:  distributed-computing, hpc
hyperqueue
Scheduler for sub-node tasks for HPC systems with batch scheduling
Stars: ✭ 48 (-81.47%)
Mutual labels:  hpc, distributed-computing
wrench
WRENCH: Cyberinfrastructure Simulation Workbench
Stars: ✭ 25 (-90.35%)
Mutual labels:  hpc, distributed-computing
Future
🚀 R package: future: Unified Parallel and Distributed Processing in R for Everyone
Stars: ✭ 735 (+183.78%)
Mutual labels:  distributed-computing, hpc
dislib
The Distributed Computing library for python implemented using PyCOMPSs programming model for HPC.
Stars: ✭ 39 (-84.94%)
Mutual labels:  hpc, distributed-computing
ParallelUtilities.jl
Fast and easy parallel mapreduce on HPC clusters
Stars: ✭ 28 (-89.19%)
Mutual labels:  hpc, distributed-computing
future.batchtools
🚀 R package future.batchtools: A Future API for Parallel and Distributed Processing using batchtools
Stars: ✭ 77 (-70.27%)
Mutual labels:  hpc, distributed-computing
ph-commons
Java 1.8+ Library with tons of utility classes required in all projects
Stars: ✭ 23 (-91.12%)
Mutual labels:  runtime
Awesome-Federated-Machine-Learning
Everything about federated learning, including research papers, books, codes, tutorials, videos and beyond
Stars: ✭ 190 (-26.64%)
Mutual labels:  distributed-computing
interbit
To the end of servers
Stars: ✭ 23 (-91.12%)
Mutual labels:  distributed-computing
Ddetours
Delphi Detours Library
Stars: ✭ 256 (-1.16%)
Mutual labels:  runtime
SharpLoader
🔮 [C#] Source code randomizer and compiler
Stars: ✭ 36 (-86.1%)
Mutual labels:  runtime
SciFlow
Scientific workflow management
Stars: ✭ 49 (-81.08%)
Mutual labels:  distributed-computing
about
华科七边形 (HUST Heptagon); guidance and exchanges from all friends are welcome.
Stars: ✭ 15 (-94.21%)
Mutual labels:  hpc
hemelb
A high performance parallel lattice-Boltzmann code for large scale fluid flow in complex geometries
Stars: ✭ 13 (-94.98%)
Mutual labels:  hpc
easybuild-framework
EasyBuild is a software installation framework in Python that allows you to install software in a structured and robust way.
Stars: ✭ 117 (-54.83%)
Mutual labels:  hpc
Blitz
Blitz++ Multi-Dimensional Array Library for C++
Stars: ✭ 257 (-0.77%)
Mutual labels:  hpc

Charm4py
========

.. image:: https://github.com/UIUC-PPL/charm4py/actions/workflows/charm4py.yml/badge.svg?event=push
   :target: https://github.com/UIUC-PPL/charm4py/actions/workflows/charm4py.yml

.. image:: http://readthedocs.org/projects/charm4py/badge/?version=latest
   :target: https://charm4py.readthedocs.io/

.. image:: https://img.shields.io/pypi/v/charm4py.svg
   :target: https://pypi.python.org/pypi/charm4py/

Charm4py (Charm++ for Python, formerly CharmPy) is a distributed computing and parallel programming framework for Python, designed for the productive development of fast, parallel and scalable applications. It is built on top of Charm++_, a C++ adaptive runtime system that has seen extensive use in the scientific and high-performance computing (HPC) communities across many disciplines, and that has been used to develop applications running on a wide range of devices: from small multi-core machines up to the largest supercomputers.

Please see the Documentation_ for more information.

Short Example
-------------

The following computes Pi in parallel, using any number of machines and processors:

.. code-block:: python

    from charm4py import charm, Chare, Group, Reducer, Future
    from math import pi
    import time

    class Worker(Chare):

        def work(self, n_steps, pi_future):
            h = 1.0 / n_steps
            s = 0.0
            for i in range(self.thisIndex, n_steps, charm.numPes()):
                x = h * (i + 0.5)
                s += 4.0 / (1.0 + x**2)
            # perform a reduction among members of the group, sending the result to the future
            self.reduce(pi_future, s * h, Reducer.sum)

    def main(args):
        n_steps = 1000
        if len(args) > 1:
            n_steps = int(args[1])
        mypi = Future()
        workers = Group(Worker)  # create one instance of Worker on every processor
        t0 = time.time()
        workers.work(n_steps, mypi)  # invoke 'work' method on every worker
        print('Approximated value of pi is:', mypi.get(),  # 'get' blocks until result arrives
              'Error is', abs(mypi.get() - pi), 'Elapsed time=', time.time() - t0)
        exit()

    charm.start(main)

This simple example demonstrates only a few of Charm4py's features. Some things to note:

- Chares (pronounced chars) are distributed Python objects.
- A Group is a type of distributed collection where one instance of the specified chare type is created on each processor.
- Remote method invocation in Charm4py is asynchronous (a sketch of this follows below).
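
For illustration, here is a minimal sketch of what asynchronous invocation looks like in practice. The ``Hello`` chare, its ``greet`` method and the greeting text are hypothetical names used only for this sketch; it assumes Charm4py's ``ret=True`` option, which asks the runtime to return a future for the remote call's result.

.. code-block:: python

    from charm4py import charm, Chare, Group

    class Hello(Chare):

        def greet(self, name):
            # runs on whichever processor hosts this chare instance
            return 'Hello %s from PE %d' % (name, charm.myPe())

    def main(args):
        greeters = Group(Hello)                    # one Hello chare per processor
        f = greeters[0].greet('world', ret=True)   # returns immediately with a future
        print(f.get())                             # 'get' blocks until the result arrives
        exit()

    charm.start(main)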

In this example there is only one chare per processor, but multiple chares (of the same or different type) can live on any given processor, which adds flexibility and can also bring performance benefits (such as dynamic load balancing). Please refer to the documentation_ for more information.
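
As a sketch of the multiple-chares-per-processor idea, the following hypothetical variant of the pi example uses a chare Array with several elements per processor; the ``Piece`` name and the factor of four chares per PE are assumptions for illustration, not part of the example above.

.. code-block:: python

    from charm4py import charm, Chare, Array, Reducer, Future

    class Piece(Chare):

        def work(self, n_steps, n_pieces, done):
            h = 1.0 / n_steps
            s = 0.0
            # each array element handles an interleaved slice of the steps;
            # thisIndex of a 1D array element is a tuple like (i,)
            for i in range(self.thisIndex[0], n_steps, n_pieces):
                x = h * (i + 0.5)
                s += 4.0 / (1.0 + x**2)
            # reduction over all elements of the array, delivered to the future
            self.reduce(done, s * h, Reducer.sum)

    def main(args):
        n_pieces = 4 * charm.numPes()    # several chares per processor
        result = Future()
        pieces = Array(Piece, n_pieces)  # the runtime distributes elements over PEs
        pieces.work(1000, n_pieces, result)
        print('pi is approximately', result.get())
        exit()

    charm.start(main)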

Contact
-------

We would like feedback from the community. If you have feature suggestions, support questions or general comments, please visit our forum_, email us at [email protected], or contact the main author at [email protected].

.. _Charm++: https://github.com/UIUC-PPL/charm

.. _Documentation: https://charm4py.readthedocs.io

.. _forum: https://charm.discourse.group/c/charm4py
