helmholtz-analytics / heat

License: MIT
Distributed tensor and machine learning framework with GPU and MPI acceleration in Python

Programming Languages

Python
139335 projects - #7 most used programming language
Jupyter Notebook
11667 projects

Projects that are alternatives to or similar to heat

soundstorm
The Federated Social Audio Platform
Stars: ✭ 26 (-79.53%)
Mutual labels:  distributed
FedScale
FedScale is a scalable and extensible open-source federated learning (FL) platform.
Stars: ✭ 274 (+115.75%)
Mutual labels:  distributed
rockgo
A game server framework under development, based on the Entity Component System (ECS) pattern.
Stars: ✭ 617 (+385.83%)
Mutual labels:  distributed
intelli-swift-core
Distributed, column-oriented storage; real-time analysis; high-performance database.
Stars: ✭ 17 (-86.61%)
Mutual labels:  distributed
go-cita
A Go implementation of CITA. https://docs.nervos.org/cita
Stars: ✭ 25 (-80.31%)
Mutual labels:  distributed
caffe-simnets
The SimNets Architecture's Implementation in Caffe
Stars: ✭ 13 (-89.76%)
Mutual labels:  tensors
orbit-db-cli
CLI for orbit-db
Stars: ✭ 60 (-52.76%)
Mutual labels:  distributed
meesee
Task queue with long-lived workers for work-based parallelization, using processes and Redis as the back-end. For distributed computing.
Stars: ✭ 14 (-88.98%)
Mutual labels:  distributed
GraviT
GraviT is a distributed ray tracing framework that enables applications to leverage hardware-optimized ray tracers within a single environment across many nodes for large-scale rendering tasks.
Stars: ✭ 18 (-85.83%)
Mutual labels:  distributed
goimpulse
A highly available, high-performance distributed ID generation service.
Stars: ✭ 17 (-86.61%)
Mutual labels:  distributed
elfo
Your next actor system
Stars: ✭ 38 (-70.08%)
Mutual labels:  distributed
Abacus
Advanced Combinatorics and Algebraic Number Theory Symbolic Computation library for JavaScript, Python
Stars: ✭ 16 (-87.4%)
Mutual labels:  tensors
tips
TiKV based Pub/Sub server
Stars: ✭ 31 (-75.59%)
Mutual labels:  distributed
p2p-project
A peer-to-peer networking framework to work across languages
Stars: ✭ 68 (-46.46%)
Mutual labels:  distributed
DemonHunter
Distributed Honeypot
Stars: ✭ 54 (-57.48%)
Mutual labels:  distributed
pooljs
Browser computing unleashed!
Stars: ✭ 17 (-86.61%)
Mutual labels:  distributed
FastNN
FastNN provides distributed training examples that use EPL.
Stars: ✭ 79 (-37.8%)
Mutual labels:  distributed
sprawl
Alpha implementation of the Sprawl distributed marketplace protocol.
Stars: ✭ 27 (-78.74%)
Mutual labels:  distributed
toy-rpc
A distributed RPC framework in Java based on Netty, Protostuff, and ZooKeeper.
Stars: ✭ 55 (-56.69%)
Mutual labels:  distributed
WeIdentity
A blockchain-based distributed identity solution compliant with the W3C DID and Verifiable Credential specifications.
Stars: ✭ 1,063 (+737.01%)
Mutual labels:  distributed

Heat is a distributed tensor framework for high performance data analytics.

Project Status

Badges: Mirror and run GitLab CI · Documentation Status · codecov · Code style: black · License: MIT · Downloads

Goals

Heat is a flexible and seamless open-source framework for high-performance data analytics and machine learning. It provides highly optimized algorithms and data structures for tensor computations on CPUs, GPUs and distributed cluster systems on top of MPI. The goal of Heat is to fill the gap between data analytics and machine learning libraries, which tend to focus on single-node performance, and traditional high-performance computing (HPC). Heat's generic, Python-first programming interface integrates seamlessly with the existing data science ecosystem and makes writing scalable scientific and data science applications as effortless as using NumPy.

Heat allows you to tackle Big Data challenges whose computational and memory requirements go beyond those of your laptop or desktop.
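As a brief, hedged illustration of this NumPy-like interface, the following sketch creates tensors that are split across MPI processes and operates on them; launched with, for example, mpirun -np 4 python example.py, each process holds only its own slice of the data:

import heat as ht

# tensors distributed along dimension 0 across all MPI processes
x = ht.arange(10, split=0)
y = ht.ones(10, split=0)

# element-wise operations and reductions communicate transparently via MPI
print(x + y)
print((x * y).sum())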

Features

  • High-performance n-dimensional tensors
  • CPU, GPU and distributed computation using MPI
  • Powerful data analytics and machine learning methods
  • Abstracted communication via split tensors (see the sketch below)
  • Python API
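To make the split-tensor abstraction more concrete, here is a small sketch: the split argument selects the dimension along which a tensor is distributed across MPI processes, while the device argument selects CPU or GPU execution. The GPU line is commented out because it assumes a CUDA-enabled PyTorch build.

import heat as ht

# a 2D tensor distributed row-wise (split along dimension 0)
a = ht.random.randn(1000, 50, split=0)

# an unsplit (replicated) tensor
b = ht.zeros((50, 10))

# distributed matrix multiplication; Heat performs the required communication
c = ht.matmul(a, b)

# the same, but placed on GPUs (requires a CUDA-enabled PyTorch installation)
# g = ht.random.randn(1000, 50, split=0, device="gpu")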

Getting Started

Check out our Jupyter Notebook tutorial right here on GitHub or in the /scripts directory.

The complete documentation of the latest version is always deployed on Read the Docs.

Support Channels

We use Stack Overflow as a forum for questions about Heat. If you do not find an answer to your question, please ask a new question there and be sure to tag it with "pyheat".

You can also reach us on GitHub Discussions.

Requirements

Heat requires Python 3.7 or newer and is based on PyTorch; specifically, we exploit PyTorch's support for GPUs and MPI parallelism. For MPI support we use mpi4py. Both packages can be installed via pip or automatically through setup.py.
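If you prefer to install these prerequisites explicitly rather than relying on setup.py, a plain pip installation along the following lines is usually sufficient. Note that the exact PyTorch command depends on your platform and CUDA version, and that mpi4py needs an MPI implementation (e.g. Open MPI or MPICH) installed on the system:

$ pip install torch mpi4py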

Installation

Tagged releases are made available on the Python Package Index (PyPI). You can typically install the latest version with

$ pip install heat[hdf5,netcdf]

where the part in brackets is a list of optional dependencies. You can omit it if you do not need HDF5 or NetCDF support.

It is recommended to use the most recent supported version of PyTorch!

It is also very important to ensure that the PyTorch version is compatible with the local CUDA installation. More information can be found in the PyTorch installation instructions.
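A quick sanity check before installing Heat is to ask the installed PyTorch which CUDA version it was built against and whether it can actually see a GPU (torch.version.cuda is None for CPU-only builds):

$ python -c 'import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())'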

Hacking

If you want to work with the development version, you can check out the sources using

$ git clone https://github.com/helmholtz-analytics/heat.git

The installation can then be done from the checked-out sources with

$ pip install .[hdf5,netcdf,dev]
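To verify that a development installation works across multiple processes, you can run the test suite under MPI. The exact number of processes and test selection may differ from what the project itself uses, so treat the following as an illustrative invocation rather than the canonical one:

$ mpirun -np 4 python -m pytest heat/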

We welcome contributions from the community; please check out our Contribution Guidelines before getting started!

License

Heat is distributed under the MIT license; see our LICENSE file.

Citing Heat

If you find Heat helpful for your research, please mention it in your publications. You can cite:

  • Götz, M., Debus, C., Coquelin, D., Krajsek, K., Comito, C., Knechtges, P., Hagemeier, B., Tarnawa, M., Hanselmann, S., Siggel, M., Basermann, A. & Streit, A. (2020). HeAT - a Distributed and GPU-accelerated Tensor Framework for Data Analytics. In 2020 IEEE International Conference on Big Data (Big Data) (pp. 276-287). IEEE, DOI: 10.1109/BigData50022.2020.9378050.
@inproceedings{heat2020,
    title={{HeAT -- a Distributed and GPU-accelerated Tensor Framework for Data Analytics}},
    author={
      Markus Götz and
      Charlotte Debus and
      Daniel Coquelin and
      Kai Krajsek and
      Claudia Comito and
      Philipp Knechtges and
      Björn Hagemeier and
      Michael Tarnawa and
      Simon Hanselmann and
      Martin Siggel and
      Achim Basermann and
      Achim Streit
    },
    booktitle={2020 IEEE International Conference on Big Data (Big Data)},
    year={2020},
    pages={276-287},
    month={December},
    publisher={IEEE},
    doi={10.1109/BigData50022.2020.9378050}
}

Acknowledgements

This work is supported by the Helmholtz Association Initiative and Networking Fund under project number ZT-I-0003 and the Helmholtz AI platform grant.

