
AshwinRJ / Federated Learning Pytorch

License: MIT
Implementation of Communication-Efficient Learning of Deep Networks from Decentralized Data

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Federated Learning Pytorch

blockchain-reading-list
A reading list on blockchain and related technologies, targeted at technical people who want a deep understanding of those topics.
Stars: ✭ 93 (-67.48%)
Mutual labels:  distributed-computing
CipherCompute
The free EAP version of the Cosmian Collaborative Confidential Computing platform. Try it!
Stars: ✭ 20 (-93.01%)
Mutual labels:  distributed-computing
data-parallelism
juliafolds.github.io/data-parallelism/
Stars: ✭ 22 (-92.31%)
Mutual labels:  distributed-computing
PFL-Non-IID
The origin of the Non-IID phenomenon is the personalization of users, who generate the Non-IID data. With Non-IID (Not Independent and Identically Distributed) issues existing in the federated learning setting, a myriad of approaches have been proposed to crack this hard nut. In contrast, personalized federated learning may take advantage…
Stars: ✭ 58 (-79.72%)
Mutual labels:  distributed-computing
frovedis
Framework of vectorized and distributed data analytics
Stars: ✭ 59 (-79.37%)
Mutual labels:  distributed-computing
Awesome-Federated-Machine-Learning
Everything about federated learning, including research papers, books, codes, tutorials, videos and beyond
Stars: ✭ 190 (-33.57%)
Mutual labels:  distributed-computing
DevOps
DevOps code to deploy eScience services
Stars: ✭ 19 (-93.36%)
Mutual labels:  distributed-computing
Tdigest
t-Digest data structure in Python. Useful for percentiles and quantiles, including distributed environments like PySpark
Stars: ✭ 274 (-4.2%)
Mutual labels:  distributed-computing
server
Hashtopolis - A Hashcat wrapper for distributed hashcracking
Stars: ✭ 954 (+233.57%)
Mutual labels:  distributed-computing
interbit
To the end of servers
Stars: ✭ 23 (-91.96%)
Mutual labels:  distributed-computing
easyFL
An experimental platform to quickly realize and compare with popular centralized federated learning algorithms. A realization of federated learning algorithm on fairness (FedFV, Federated Learning with Fair Averaging, https://fanxlxmu.github.io/publication/ijcai2021/) was accepted by IJCAI-21 (https://www.ijcai.org/proceedings/2021/223).
Stars: ✭ 104 (-63.64%)
Mutual labels:  distributed-computing
bloomfilter
Bloomfilter written in Golang, includes rotation and RPC
Stars: ✭ 61 (-78.67%)
Mutual labels:  distributed-computing
mobius
Mobius is an AI infra platform including realtime computing and training.
Stars: ✭ 22 (-92.31%)
Mutual labels:  distributed-computing
realtimemap-dotnet
A showcase for Proto.Actor - an ultra-fast distributed actors solution for Go, C#, and Java/Kotlin.
Stars: ✭ 47 (-83.57%)
Mutual labels:  distributed-computing
Charm4py
Parallel Programming with Python and Charm++
Stars: ✭ 259 (-9.44%)
Mutual labels:  distributed-computing
Distributed-System-Algorithms-Implementation
Algorithms for implementation of Clock Synchronization, Consistency, Mutual Exclusion, Leader Election
Stars: ✭ 39 (-86.36%)
Mutual labels:  distributed-computing
SadlyDistributed
Distributing your code(soul), in almost any language(state), among a cluster of idle browsers(voids)
Stars: ✭ 20 (-93.01%)
Mutual labels:  distributed-computing
Awesome Distributed Deep Learning
A curated list of awesome Distributed Deep Learning resources.
Stars: ✭ 277 (-3.15%)
Mutual labels:  distributed-computing
Gleam
Fast, efficient, and scalable distributed map/reduce system, DAG execution, in memory or on disk, written in pure Go, runs standalone or distributedly.
Stars: ✭ 2,949 (+931.12%)
Mutual labels:  distributed-computing
SciFlow
Scientific workflow management
Stars: ✭ 49 (-82.87%)
Mutual labels:  distributed-computing

Federated-Learning (PyTorch)

Implementation of the vanilla federated learning paper: Communication-Efficient Learning of Deep Networks from Decentralized Data.

Experiments are run on MNIST, Fashion-MNIST and CIFAR-10, under both IID and non-IID data distributions. In the non-IID case, the data can be split equally or unequally amongst users.

Since the purpose of these experiments is to illustrate the effectiveness of the federated learning paradigm, only simple models such as an MLP and a CNN are used.
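
For reference, a minimal MLP of the kind used here could look like the sketch below (layer sizes and names are illustrative, not necessarily those in the repo's models.py):

import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """A small two-layer perceptron for flattened image inputs."""
    def __init__(self, dim_in=28 * 28, dim_hidden=64, dim_out=10):
        super().__init__()
        self.fc1 = nn.Linear(dim_in, dim_hidden)
        self.fc2 = nn.Linear(dim_hidden, dim_out)

    def forward(self, x):
        x = x.view(x.size(0), -1)          # flatten (B, C, H, W) to (B, C*H*W)
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.fc2(x), dim=1)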

Requirements

Install all the packages from requirements.txt; an example command is given after the list below.

  • Python 3
  • PyTorch
  • Torchvision
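
Assuming a standard pip setup, the packages listed there can be installed with:

pip install -r requirements.txt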

Data

  • Download the train and test datasets manually, or let them be downloaded automatically via torchvision datasets.
  • Experiments are run on MNIST, Fashion-MNIST and CIFAR-10.
  • To use your own dataset: move it into the data directory and write a wrapper around the PyTorch Dataset class, as in the sketch after this list.
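
A minimal wrapper could look like the sketch below (the folder layout, class name, and label format are illustrative, not something the repo prescribes):

import os
from PIL import Image
from torch.utils.data import Dataset

class CustomImageDataset(Dataset):
    """Wraps a folder of images plus a list of (filename, label) pairs."""
    def __init__(self, root, samples, transform=None):
        self.root = root              # e.g. 'data/my_dataset'
        self.samples = samples        # list of (filename, label) tuples
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        fname, label = self.samples[idx]
        image = Image.open(os.path.join(self.root, fname)).convert('L')  # grayscale; use 'RGB' for colour data
        if self.transform is not None:
            image = self.transform(image)
        return image, label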

Running the experiments

The baseline experiment trains the model in the conventional way.

  • To run the baseline experiment with MNIST on MLP using CPU:
python src/baseline_main.py --model=mlp --dataset=mnist --epochs=10
  • Or to run it on GPU (e.g., if gpu:0 is available):
python src/baseline_main.py --model=mlp --dataset=mnist --gpu=0 --epochs=10

The federated experiment trains a global model by aggregating updates from many local models; a sketch of the aggregation step follows the commands below.

  • To run the federated experiment with CIFAR on CNN (IID):
python src/federated_main.py --model=cnn --dataset=cifar --gpu=0 --iid=1 --epochs=10
  • To run the same experiment under non-IID condition:
python src/federated_main.py --model=cnn --dataset=cifar --gpu=0 --iid=0 --epochs=10
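
In each communication round, a fraction of the users train locally and the server averages their model weights (FedAvg). A minimal sketch of that aggregation step is shown below; the repo implements the equivalent logic in its own utility code, so the function name here is illustrative:

import copy
import torch

def average_weights(local_weights):
    """Return the element-wise average of a list of model state_dicts."""
    avg_weights = copy.deepcopy(local_weights[0])
    for key in avg_weights.keys():
        for local in local_weights[1:]:
            avg_weights[key] += local[key]
        avg_weights[key] = torch.div(avg_weights[key], len(local_weights))
    return avg_weights

# After each round, the averaged weights are loaded back into the global model, e.g.:
# global_model.load_state_dict(average_weights(list_of_local_state_dicts))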

You can change the default values of other parameters to simulate different conditions. Refer to the options section.

Options

The default values for the various parameters passed to the experiments are given in options.py. Details of some of those parameters are given below:

  • --dataset: Default: 'mnist'. Options: 'mnist', 'fmnist', 'cifar'
  • --model: Default: 'mlp'. Options: 'mlp', 'cnn'
  • --gpu: Default: None (runs on CPU). Can also be set to the specific gpu id.
  • --epochs: Number of rounds of training.
  • --lr: Learning rate set to 0.01 by default.
  • --verbose: Detailed log outputs. Activated by default, set to 0 to deactivate.
  • --seed: Random Seed. Default set to 1.

Federated Parameters

  • --iid: Distribution of data amongst users. Default set to IID. Set to 0 for non-IID.
  • --num_users: Number of users. Default is 100.
  • --frac: Fraction of users to be used for federated updates. Default is 0.1.
  • --local_ep: Number of local training epochs in each user. Default is 10.
  • --local_bs: Batch size of local updates in each user. Default is 10.
  • --unequal: Used in the non-IID setting. Option to split the data amongst users equally or unequally. Default set to 0 for equal splits. Set to 1 for unequal splits. An example command combining these flags follows the list.
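
For example, a federated run on MNIST with a non-IID, unequal split across 50 users could be launched as follows (the flag values are purely illustrative):

python src/federated_main.py --model=cnn --dataset=mnist --gpu=0 --epochs=10 --iid=0 --unequal=1 --num_users=50 --frac=0.1 --local_ep=10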

Results on MNIST

Baseline Experiment:

The experiment involves training a single model in the conventional way.

Parameters:

  • Optimizer: SGD
  • Learning Rate: 0.01

Table 1: Test accuracy after training for 10 epochs:

Model    Test Accuracy
MLP      92.71%
CNN      98.42%

Federated Experiment:

The experiment involves training a global model in the federated setting.

Federated parameters (default values):

  • Fraction of users (C): 0.1
  • Local Batch size (B): 10
  • Local Epochs (E): 10
  • Optimizer: SGD
  • Learning Rate: 0.01

Table 2: Test accuracy after training for 10 global epochs with the above federated parameters:

Model    IID        Non-IID (equal)
MLP      88.38%     73.49%
CNN      97.28%     75.94%

Further Readings

Papers:

Blog Posts:
