
The Distributed Computing Library

A distributed computing library implemented on top of the PyCOMPSs programming model for HPC.


Website | Documentation | Releases | Slack

Introduction

The Distributed Computing Library (dislib) provides distributed algorithms ready to use as a library. So far, dislib is highly focused on machine learning algorithms, and it is greatly inspired by scikit-learn. However, other types of numerical algorithms might be added in the future. The library has been implemented on top of the PyCOMPSs programming model, and it is being developed by the Workflows and Distributed Computing group of the Barcelona Supercomputing Center. dislib allows easy local development through Docker. Once the code is finished, it can be run directly on any distributed platform without any further changes. This includes clusters, supercomputers, clouds, and containerized platforms.
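To illustrate the execution model dislib builds on, the sketch below mimics, in plain NumPy, how a blocked distributed computation works: the data is split into row blocks, a partial result is computed per block (in dislib, each such call would be a PyCOMPSs task scheduled on a worker node), and the partials are reduced into the final result. This is an illustrative sketch of the pattern only, not dislib's actual API; the function names are hypothetical.

```python
import numpy as np

def partial_sum(block):
    # In a dislib-style library, this would be a remote task; here it
    # just returns the per-block column sums and the block's row count.
    return block.sum(axis=0), block.shape[0]

def distributed_mean(data, block_size):
    # Split the rows into blocks, as a blocked distributed array would.
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    # Each partial is independent, so a runtime can execute them in parallel.
    partials = [partial_sum(b) for b in blocks]
    total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    # Final reduction over the partial results.
    return total / count

data = np.arange(12, dtype=float).reshape(6, 2)
print(distributed_mean(data, block_size=2))  # same as data.mean(axis=0)
```

The key point is that only the per-block functions touch the data; the driver code manipulates block handles, which is what lets the same program run unchanged on a laptop or a supercomputer.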

Contents

Quickstart

Get started with dislib following our quickstart guide.

Availability

Currently, the following supercomputers already have PyCOMPSs installed and ready to use. If you need help configuring your own cluster or supercomputer, drop us an email and we will be pleased to help.

  • Marenostrum 4 - Barcelona Supercomputing Center (BSC)
  • Minotauro - Barcelona Supercomputing Center (BSC)
  • Nord 3 - Barcelona Supercomputing Center (BSC)
  • Cobi - Barcelona Supercomputing Center (BSC)
  • Juron - Jülich Supercomputing Centre (JSC)
  • Jureca - Jülich Supercomputing Centre (JSC)
  • Ultraviolet - The Genome Analysis Center (TGAC)
  • Archer - University of Edinburgh’s Advanced Computing Facility (ACF)
  • Axiom - University of Novi Sad, Faculty of Sciences (UNSPMF)


Contributing

Contributions are welcome and very much appreciated. We are also open to starting research collaborations or mentoring if you are interested in or need assistance implementing new algorithms. Please refer to our Contribution Guide for more details.

Citing dislib

If you use dislib in a scientific publication, we would appreciate you citing the following paper:

J. Álvarez Cid-Fuentes, S. Solà, P. Álvarez, A. Castro-Ginard, and R. M. Badia, "dislib: Large Scale High Performance Machine Learning in Python," in Proceedings of the 15th International Conference on eScience, 2019, pp. 96-105.

Bibtex:

@inproceedings{dislib,
  title     = {{dislib: Large Scale High Performance Machine Learning in Python}},
  author    = {Javier Álvarez Cid-Fuentes and Salvi Solà and Pol Álvarez and Alfred Castro-Ginard and Rosa M. Badia},
  booktitle = {Proceedings of the 15th International Conference on eScience},
  pages     = {96-105},
  year      = {2019},
}

Acknowledgements

This work has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement H2020-MSCA-COFUND-2016-754433.

This work has also received funding from the collaboration project between the Barcelona Supercomputing Center (BSC) and Fujitsu Ltd.

In addition, the development of this software has also been supported by the following institutions:

  • Spanish Government under contracts SEV2015-0493, TIN2015-65316 and PID2019-107255G.

  • Generalitat de Catalunya under contract 2017-SGR-01414 and the CECH project, co-funded at 50% by the European Regional Development Fund under the framework of the ERDF Operational Programme for Catalunya 2014-2020.

  • The European Commission through the following R&D projects:

    • H2020 I-BiDaaS project (Contract 780787)
    • H2020 BioExcel Center of Excellence (Contracts 823830 and 675728)
    • H2020 EuroHPC Joint Undertaking MEEP Project (Contract 946002)
    • H2020 EuroHPC Joint Undertaking eFlows4HPC Project (Contract 955558)
    • H2020 AI-Sprint project (Contract 101016577)
    • H2020 PerMedCoE Center of Excellence (Contract 951773)
    • Horizon Europe CAELESTIS project (Contract 101056886)
    • Horizon Europe DT-Geo project (Contract 101058129)

License

Apache License Version 2.0, see LICENSE
