
tomMoral / dicodile

License: BSD-3-Clause
Experiments for "Distributed Convolutional Dictionary Learning (DiCoDiLe): Pattern Discovery in Large Images and Signals"

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to dicodile

phylanx
An Asynchronous Distributed C++ Array Processing Toolkit
Stars: ✭ 71 (+373.33%)
Mutual labels:  distributed-computing
zmq
ZeroMQ based distributed patterns
Stars: ✭ 27 (+80%)
Mutual labels:  distributed-computing
QCFractal
A distributed compute and database platform for quantum chemistry.
Stars: ✭ 107 (+613.33%)
Mutual labels:  distributed-computing
cre
common runtime environment for distributed programming languages
Stars: ✭ 20 (+33.33%)
Mutual labels:  distributed-computing
JOLI.jl
Julia Operators LIbrary
Stars: ✭ 14 (-6.67%)
Mutual labels:  distributed-computing
protoactor-go
Proto Actor - Ultra fast distributed actors for Go, C# and Java/Kotlin
Stars: ✭ 4,138 (+27486.67%)
Mutual labels:  distributed-computing
lazycluster
🎛 Distributed machine learning made simple.
Stars: ✭ 43 (+186.67%)
Mutual labels:  distributed-computing
dlsa
Distributed least squares approximation (dlsa) implemented with Apache Spark
Stars: ✭ 25 (+66.67%)
Mutual labels:  distributed-computing
asyncoro
Python framework for asynchronous, concurrent, distributed, network programming with coroutines
Stars: ✭ 50 (+233.33%)
Mutual labels:  distributed-computing
distex
Distributed process pool for Python
Stars: ✭ 101 (+573.33%)
Mutual labels:  distributed-computing
ripple
Simple shared surface streaming application
Stars: ✭ 17 (+13.33%)
Mutual labels:  distributed-computing
pat-helland-and-me
Materials related to my talk "Pat Helland and Me"
Stars: ✭ 14 (-6.67%)
Mutual labels:  distributed-computing
Prime95
Prime95 source code from GIMPS to find Mersenne Prime.
Stars: ✭ 25 (+66.67%)
Mutual labels:  distributed-computing
Archived-SANSA-ML
SANSA Machine Learning Layer
Stars: ✭ 39 (+160%)
Mutual labels:  distributed-computing
Distributed-Data-Structures
[GSoC] Distributed Data Structures - Collections Framework for Chapel language
Stars: ✭ 13 (-13.33%)
Mutual labels:  distributed-computing
Federated-Learning-and-Split-Learning-with-raspberry-pi
SRDS 2020: End-to-End Evaluation of Federated Learning and Split Learning for Internet of Things
Stars: ✭ 54 (+260%)
Mutual labels:  distributed-computing
hyperqueue
Scheduler for sub-node tasks for HPC systems with batch scheduling
Stars: ✭ 48 (+220%)
Mutual labels:  distributed-computing
kar
KAR: A Runtime for the Hybrid Cloud
Stars: ✭ 18 (+20%)
Mutual labels:  distributed-computing
swarm-learning
A simplified library for decentralized, privacy preserving machine learning
Stars: ✭ 142 (+846.67%)
Mutual labels:  distributed-computing
IoTPy
Python for streams
Stars: ✭ 24 (+60%)
Mutual labels:  distributed-computing


This package is still under development. If you have any trouble running this code, please open an issue on GitHub.

DiCoDiLe

Package to run the experiments for the preprint paper Distributed Convolutional Dictionary Learning (DiCoDiLe): Pattern Discovery in Large Images and Signals.

Installation

All the tests should work with Python >= 3.6. This package depends on the Python libraries numpy, matplotlib, scipy, mpi4py and joblib. The package can be installed with the following command, run from the root of the package:

pip install -e .

Or using the conda environment:

conda env create -f dicodile_env.yml
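Then activate the environment before running the code (the environment name is defined inside dicodile_env.yml; dicodile is assumed here for illustration):

conda activate dicodile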

To build the documentation, use:

pip install -e .[doc]
cd docs
make html

To run the tests:

pip install -e .[test]
pytest .

Usage

All experiments rely on mpi4py and will try to spawn workers depending on the parameters set in each experiment. If you need a hostfile to indicate to MPI where to spawn the new workers, set the environment variable MPI_HOSTFILE=/path/to/the/hostfile and it will be detected automatically in all the experiments. Note that for each experiment you should provide enough workers to allow the script to run.
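For example, you can export the variable once and then launch a script directly; the invocation below is a sketch reusing the example script mentioned later in this README:

$ export MPI_HOSTFILE=/path/to/the/hostfile
$ python examples/plot_mandrill.py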

All figures can be generated using the scripts in benchmarks. Each script generates and saves the data needed to reproduce its figure. The figure can then be plotted by re-running the same script with the argument --plot. The figures are saved as PDF files in the benchmarks_results folder. The computations are cached with joblib to make them robust to failures.
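As a sketch of that workflow (benchmark_scaling.py is a hypothetical script name used for illustration, not necessarily a file in the repository):

python benchmarks/benchmark_scaling.py         # generates and caches the data
python benchmarks/benchmark_scaling.py --plot  # reuses the cached data and saves the figure as a PDF in benchmarks_results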

Note

Open MPI tries to use all network interfaces that are up. This might cause the program to hang due to virtual network interfaces that cannot actually be used to communicate with MPI processes. For more info, see the Open MPI FAQ.

In case your program hangs, you can launch the computation with the mpirun command:

  • either specifying usable interfaces using the --mca btl_tcp_if_include parameter:
$ mpirun -np 1 \
         --mca btl_tcp_if_include wlp2s0 \
         --hostfile hostfile \
         python -m mpi4py examples/plot_mandrill.py
  • or excluding the virtual interfaces using the --mca btl_tcp_if_exclude parameter:
$ mpirun -np 1 \
         --mca btl_tcp_if_exclude docker0 \
         --hostfile hostfile \
         python -m mpi4py examples/plot_mandrill.py

Alternatively, you can also restrict the interfaces used by setting the environment variables OMPI_MCA_btl_tcp_if_include or OMPI_MCA_btl_tcp_if_exclude:

$ export OMPI_MCA_btl_tcp_if_include="wlp2s0"

$ export OMPI_MCA_btl_tcp_if_exclude="docker0"