gaborvecsei / Federated-Learning-Mini-Framework

License: MIT
Federated Learning mini-framework with Keras

Programming Languages

Python

Projects that are alternatives to or similar to Federated-Learning-Mini-Framework

PFL-Non-IID
Non-IID (Not Independent and Identically Distributed) data originates from user personalization: it is the individual users who generate the Non-IID data. A myriad of approaches have been proposed to tackle the Non-IID issues that arise in the federated learning setting. In contrast, personalized federated learning may take advantage…
Stars: ✭ 58 (+52.63%)
Mutual labels:  cifar10, federated-learning
KD3A
The official implementation of the KD3A model from the paper "KD3A: Unsupervised Multi-Source Decentralized Domain Adaptation via Knowledge Distillation".
Stars: ✭ 63 (+65.79%)
Mutual labels:  federated-learning
Classification Nets
Implements popular models in different DL frameworks, such as TensorFlow and Caffe.
Stars: ✭ 17 (-55.26%)
Mutual labels:  cifar10
fedpa
Federated posterior averaging implemented in JAX
Stars: ✭ 38 (+0%)
Mutual labels:  federated-learning
communication-in-cross-silo-fl
Official code for "Throughput-Optimal Topology Design for Cross-Silo Federated Learning" (NeurIPS'20)
Stars: ✭ 19 (-50%)
Mutual labels:  federated-learning
FedReID
Implementation of Federated Learning for Person Re-identification (code for the ACMMM 2020 paper)
Stars: ✭ 68 (+78.95%)
Mutual labels:  federated-learning
Federated-Learning-and-Split-Learning-with-raspberry-pi
SRDS 2020: End-to-End Evaluation of Federated Learning and Split Learning for Internet of Things
Stars: ✭ 54 (+42.11%)
Mutual labels:  federated-learning
FedLab-benchmarks
Standard federated learning implementations in FedLab and FL benchmarks.
Stars: ✭ 49 (+28.95%)
Mutual labels:  federated-learning
DenseNet-Cifar10
Trains DenseNet on Cifar-10 with Keras
Stars: ✭ 39 (+2.63%)
Mutual labels:  cifar10
substra
Substra is a framework for traceable ML orchestration on decentralized sensitive data.
Stars: ✭ 143 (+276.32%)
Mutual labels:  federated-learning
pFedMe
Personalized Federated Learning with Moreau Envelopes (pFedMe) using PyTorch (NeurIPS 2020)
Stars: ✭ 196 (+415.79%)
Mutual labels:  federated-learning
decentralized-ml
Full stack service enabling decentralized machine learning on private data
Stars: ✭ 50 (+31.58%)
Mutual labels:  federated-learning
resnet-cifar10
ResNet for Cifar10
Stars: ✭ 21 (-44.74%)
Mutual labels:  cifar10
ReZero-ResNet
Unofficial PyTorch implementation of ReZero in ResNet
Stars: ✭ 23 (-39.47%)
Mutual labels:  cifar10
backdoors101
Backdoors Framework for Deep Learning and Federated Learning. A lightweight tool for conducting backdoor research.
Stars: ✭ 181 (+376.32%)
Mutual labels:  federated-learning
federated-learning-poc
Proof of Concept of a Federated Learning framework that maintains the privacy of the participants involved.
Stars: ✭ 13 (-65.79%)
Mutual labels:  federated-learning
keras-deep-learning
Various implementations and projects on CNNs, RNNs, LSTMs, GANs, etc.
Stars: ✭ 22 (-42.11%)
Mutual labels:  cifar10
deeplearning-mpo
Replaces the fully-connected layers of FC2, LeNet-5, VGG, ResNet, and DenseNet with MPO.
Stars: ✭ 26 (-31.58%)
Mutual labels:  cifar10
Keras-CIFAR10
Practice on CIFAR10 with Keras
Stars: ✭ 25 (-34.21%)
Mutual labels:  cifar10
MOON
Model-Contrastive Federated Learning (CVPR 2021)
Stars: ✭ 93 (+144.74%)
Mutual labels:  federated-learning

Federated Learning mini-framework

This repo contains a Federated Learning (FL) setup built on Keras (TensorFlow). The goal is a codebase with which you can easily run FL experiments on both IID and Non-IID data.

The two main components are the Server and the Clients. The Server holds the model description, distributes the data, and coordinates the learning: after each round it aggregates the clients' results to update its own (global) model. Each Client receives a different random chunk of the data along with the model description and the current global weights. Starting from this initialized state, it trains on its own dataset for a few iterations. In a real-world scenario the clients are edge devices and their training runs in parallel.
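
The Server-side aggregation is a per-layer average of the clients' weights (FedAvg-style). Here is a minimal sketch of that step, assuming each client returns its weights in the format produced by Keras' model.get_weights(); the function name below is illustrative, not the repo's actual API:

    import numpy as np

    def average_client_weights(client_weight_lists):
        """Per-layer mean of several clients' weights (FedAvg with equal client weighting)."""
        # client_weight_lists: one list per client, each as returned by model.get_weights()
        return [np.mean(layer_stack, axis=0) for layer_stack in zip(*client_weight_lists)]

    # The Server would then apply the result with global_model.set_weights(...).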

In this setup the client trainings run sequentially, using only the CPU or a single GPU.
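
One communication round then boils down to a sequential loop like the sketch below. Client here is a hypothetical object holding a Keras model and its data chunk, and average_client_weights is the helper sketched above; none of these names are the repo's exact API.

    import random

    def run_round(global_model, clients, client_fraction=0.1, local_epochs=1, batch_size=64):
        """One communication round: sample clients, train each one sequentially, average the results."""
        selected = random.sample(clients, max(1, int(client_fraction * len(clients))))
        collected_weights = []
        for client in selected:
            client.model.set_weights(global_model.get_weights())   # start from the global weights
            client.model.fit(client.x, client.y, epochs=local_epochs,
                             batch_size=batch_size, verbose=0)      # local training on the client's chunk
            collected_weights.append(client.model.get_weights())
        global_model.set_weights(average_client_weights(collected_weights))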

Cifar10 - "Shallow" VGG16

We trained a shallow version of VGG16 on Cifar10 with IID data: there were 100 clients, and in each round (global epoch) only 10% of them were used, selected randomly at every communication round. Every client fitted 1 epoch on its own chunk of the data, with a batch size of [blue: 8, orange: 64, gray: 256] and a learning rate of 0.1.
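
For reference, that experiment corresponds to a configuration roughly like the following (the key names are illustrative, not the repo's actual arguments):

    fl_config = {
        "nb_clients": 100,        # total number of simulated clients
        "client_fraction": 0.1,   # 10% of the clients are sampled each round
        "local_epochs": 1,        # each selected client fits 1 epoch per round
        "batch_size": 64,         # 8 / 64 / 256 were compared (blue / orange / gray)
        "learning_rate": 0.1,
        "data_split": "IID",
    }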

A "single model" training (1 client with all the data) is also shown on the graph in red; its batch size was 256 and its learning rate was 0.05.

(The TensorBoard logs for each experiment are included in the release, so you can easily visualize them.)

About

Gábor Vecsei
