
nnstreamer / nntrainer

License: Apache-2.0
NNTrainer is a software framework for training neural network models on devices.

Programming Languages

C++: 36643 projects (#6 most used programming language)
Python: 139335 projects (#7 most used programming language)
C: 50402 projects (#5 most used programming language)
Meson: 512 projects
Makefile: 30231 projects
Shell: 77523 projects

Projects that are alternatives to or similar to nntrainer

Age-Gender Estimation TF-Android
Age + Gender Estimation on Android with TensorFlow Lite
Stars: ✭ 34 (-63.04%)
Mutual labels:  tensorflow-lite
training-microservices
Node.js Microservices training
Stars: ✭ 50 (-45.65%)
Mutual labels:  training
intelligence-icons
intelligence-icons is a collection of icons and diagrams for building training and marketing materials around Intelligence sharing; including but not limited to CTI, MISP Threat Sharing, STIX 2.
Stars: ✭ 32 (-65.22%)
Mutual labels:  intelligence
sagemaker-xgboost-container
Docker container based on the open source XGBoost framework (https://xgboost.readthedocs.io/en/latest/) that allows customers to use their own XGBoost scripts in SageMaker.
Stars: ✭ 93 (+1.09%)
Mutual labels:  training
hello-kubecon
A Charmed Operator demonstration for Operator Day 2021, hosted by Canonical
Stars: ✭ 14 (-84.78%)
Mutual labels:  training
Korean-OCR-Model-Design-based-on-Keras-CNN
Korean OCR Model Design (한글 OCR 모델 설계)
Stars: ✭ 34 (-63.04%)
Mutual labels:  training
TIGMINT
TIGMINT: OSINT (Open Source Intelligence) GUI software framework
Stars: ✭ 195 (+111.96%)
Mutual labels:  intelligence
kedro-training
Find documentation and a template project for delivering Kedro training.
Stars: ✭ 26 (-71.74%)
Mutual labels:  training
HPC
A collection of various resources, examples, and executables for the general NREL HPC user community's benefit. Use the following website for accessing documentation.
Stars: ✭ 64 (-30.43%)
Mutual labels:  training
kubernetes-localdev
Create a local Kubernetes development environment on macOS or Windows and WSL2, including HTTPS/TLS and OAuth2/OIDC authentication.
Stars: ✭ 210 (+128.26%)
Mutual labels:  training
TPU-MobilenetSSD
Edge TPU Accelerator / Multi-TPU + MobileNet-SSD v2 + Python + Async + LattePandaAlpha/RaspberryPi3/LaptopPC
Stars: ✭ 82 (-10.87%)
Mutual labels:  tensorflow-lite
face-detection-tflite
Face and iris detection for Python based on MediaPipe
Stars: ✭ 78 (-15.22%)
Mutual labels:  tensorflow-lite
responsivebootstrap
This is the repository for my course, Bootstrap Layouts: Responsive Single-Page Design on LinkedIn Learning and Lynda.com.
Stars: ✭ 49 (-46.74%)
Mutual labels:  training
rankpruning
🧹 Formerly for binary classification with noisy labels. Replaced by cleanlab.
Stars: ✭ 81 (-11.96%)
Mutual labels:  training
Python-Studies
All studies about python
Stars: ✭ 56 (-39.13%)
Mutual labels:  training
ova-server
OpenVisionAPI server
Stars: ✭ 93 (+1.09%)
Mutual labels:  tensorflow-lite
tizen-studio-arch
How to install TizenStudio on Arch Linux
Stars: ✭ 31 (-66.3%)
Mutual labels:  tizen
formations
Course materials for alter way's OpenStack and container training sessions (in French)
Stars: ✭ 43 (-53.26%)
Mutual labels:  training
texas-poker-engine
Dummy Texas Poker Engine open source edition
Stars: ✭ 4 (-95.65%)
Mutual labels:  intelligence
chainer-fcis
[This project has moved to ChainerCV] Chainer Implementation of Fully Convolutional Instance-aware Semantic Segmentation
Stars: ✭ 45 (-51.09%)
Mutual labels:  training

NNtrainer


NNTrainer is a software framework for training neural network models on devices.

Overview

NNTrainer is an open source project. Its aim is to provide a software framework for training neural network models on embedded devices with relatively limited resources. Rather than training all layers of a network from scratch, NNTrainer fine-tunes an existing model on the device with user data for personalization.

Even though NNTrainer runs on device, it provides full functionality for training models while using the limited device resources efficiently. NNTrainer can train various machine learning algorithms such as k-Nearest Neighbor (k-NN), neural networks, logistic regression, reinforcement learning algorithms, recurrent networks, and more. We also provide examples for tasks such as few-shot learning, ResNet, VGG, and product rating, and more will be added. All of these have been tested on Samsung Galaxy smartphones with Android and on PCs (Ubuntu 18.04/20.04).

NNTrainer: Light-Weight On-Device Training Framework, arXiv, 2022
NNTrainer: Towards the on-device learning for personalization, Samsung Software Developer Conference 2021 (Korean)
NNTrainer: Personalize neural networks on devices!, Samsung Developer Conference 2021
NNTrainer: "On-device learning", Samsung AI Forum 2021

Official Releases

|         | Tizen           | Ubuntu       | Android/NDK Build |
| ------- | --------------- | ------------ | ----------------- |
|         | 6.0M2 and later | 18.04        | 9/P               |
| arm     | armv7l badge    | Available    | Ready             |
| arm64   | aarch64 badge   | Available    | android badge     |
| x64     | x64 badge       | ubuntu badge | Ready             |
| x86     | x86 badge       | N/A          | N/A               |
| Publish | Tizen Repo      | PPA          |                   |
| API     | C (Official)    | C/C++        | C/C++             |
  • Ready: the CI system ensures build-ability and unit testing. Users may easily build and execute; however, we do not have an automated release & deployment system for these targets.
  • Available: binary packages are released and deployed automatically and periodically along with CI tests.
  • Daily Release
  • SDK Support: Tizen Studio (6.0 M2+)

Maintainer

Reviewers

Components

Supported Layers

This component defines the layers that make up a neural network model. Each layer has its own set of properties. A usage sketch follows the table below.

| Keyword | Layer Class Name | Description |
| ------- | ---------------- | ----------- |
| conv1d | Conv1DLayer | Convolution 1-Dimensional Layer |
| conv2d | Conv2DLayer | Convolution 2-Dimensional Layer |
| pooling2d | Pooling2DLayer | Pooling 2-Dimensional Layer. Supports average / max / global average / global max pooling |
| flatten | FlattenLayer | Flatten layer |
| fully_connected | FullyConnectedLayer | Fully connected layer |
| input | InputLayer | Input Layer. This is not always required. |
| batch_normalization | BatchNormalizationLayer | Batch normalization layer |
| layer_normalization | LayerNormalizationLayer | Layer normalization layer |
| activation | ActivationLayer | Set by layer property |
| addition | AdditionLayer | Add input layers |
| attention | AttentionLayer | Attention layer |
| centroid_knn | CentroidKNN | Centroid K-nearest neighbor layer |
| concat | ConcatLayer | Concatenate input layers |
| multiout | MultiOutLayer | Multi-Output Layer |
| backbone_nnstreamer | NNStreamerLayer | Encapsulate NNStreamer as a layer |
| backbone_tflite | TfLiteLayer | Encapsulate tflite as a layer |
| permute | PermuteLayer | Permute layer for transpose |
| preprocess_flip | PreprocessFlipLayer | Preprocess random flip layer |
| preprocess_l2norm | PreprocessL2NormLayer | Preprocess simple L2-normalization layer |
| preprocess_translate | PreprocessTranslateLayer | Preprocess translate layer |
| reshape | ReshapeLayer | Reshape tensor dimension layer |
| split | SplitLayer | Split layer |
| dropout | DropOutLayer | Dropout Layer |
| embedding | EmbeddingLayer | Embedding Layer |
| positional_encoding | PositionalEncodingLayer | Positional Encoding Layer |
| rnn | RNNLayer | Recurrent Layer |
| rnncell | RNNCellLayer | Recurrent Cell Layer |
| gru | GRULayer | Gated Recurrent Unit Layer |
| grucell | GRUCellLayer | Gated Recurrent Unit Cell Layer |
| lstm | LSTMLayer | Long Short-Term Memory Layer |
| lstmcell | LSTMCellLayer | Long Short-Term Memory Cell Layer |
| zoneoutlstmcell | ZoneoutLSTMCellLayer | Zoneout Long Short-Term Memory Cell Layer |
| time_dist | TimeDistLayer | Time-distributed Layer |
| multi_head_attention | MultiHeadAttentionLayer | Multi-Head Attention Layer |
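
The keywords above are the type strings used when instantiating layers. Below is a minimal sketch, assuming NNTrainer's C++ (ccapi) factory `ml::train::createLayer()` and a `<layer.h>` header on the include path; the property strings (`input_shape`, `filters`, `kernel_size`, `pooling`, `pool_size`, `unit`, `activation`) follow common NNTrainer examples and may differ between versions.

```cpp
// Sketch: creating layers from the keywords listed above (assumed ccapi usage).
#include <layer.h>  // ml::train::Layer, ml::train::createLayer (assumed header name)

#include <memory>
#include <vector>

int main() {
  using LayerHandle = std::shared_ptr<ml::train::Layer>;

  // Each layer is built from its keyword plus "key=value" property strings.
  std::vector<LayerHandle> layers;
  layers.push_back(ml::train::createLayer("input", {"input_shape=1:28:28"}));
  layers.push_back(ml::train::createLayer(
    "conv2d", {"filters=4", "kernel_size=3,3", "activation=relu"}));
  layers.push_back(ml::train::createLayer("pooling2d", {"pooling=max", "pool_size=2,2"}));
  layers.push_back(ml::train::createLayer("flatten", {}));
  layers.push_back(ml::train::createLayer(
    "fully_connected", {"unit=10", "activation=softmax"}));

  return layers.size() == 5 ? 0 : 1;
}
```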

Supported Optimizers

NNTrainer provides:

| Keyword | Optimizer Name | Description |
| ------- | -------------- | ----------- |
| sgd | Stochastic Gradient Descent | - |
| adam | Adaptive Moment Estimation | - |

| Keyword | Learning Rate | Description |
| ------- | ------------- | ----------- |
| exponential | exponential learning rate decay | - |
| constant | constant learning rate | - |
| step | step learning rate | - |
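
An optimizer is created from its keyword and attached to a model. The sketch below assumes the `ml::train::createOptimizer()` and `ml::train::createModel()` factories plus a `learning_rate` property; learning-rate schedule keywords are typically passed as additional properties, and the exact keys may differ by version.

```cpp
// Sketch: selecting an optimizer by keyword (assumed ccapi usage).
#include <model.h>      // ml::train::Model, ml::train::createModel (assumed header names)
#include <optimizer.h>  // ml::train::Optimizer, ml::train::createOptimizer

#include <utility>

int main() {
  auto model = ml::train::createModel(ml::train::ModelType::NEURAL_NET);

  // "adam" / "sgd" are the optimizer keywords from the table above.
  auto optimizer = ml::train::createOptimizer("adam", {"learning_rate=0.001"});
  model->setOptimizer(std::move(optimizer));
  return 0;
}
```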

Supported Loss Functions

NNTrainer provides:

| Keyword | Class Name | Description |
| ------- | ---------- | ----------- |
| cross_sigmoid | CrossEntropySigmoidLossLayer | Cross entropy sigmoid loss layer |
| cross_softmax | CrossEntropySoftmaxLossLayer | Cross entropy softmax loss layer |
| constant_derivative | ConstantDerivativeLossLayer | Constant derivative loss layer |
| mse | MSELossLayer | Mean square error loss layer |
| kld | KLDLossLayer | Kullback-Leibler divergence loss layer |
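
Loss functions are also referenced by keyword. A minimal sketch, assuming the loss can be appended to the model as a layer via `ml::train::createLayer()` using the keyword from this table; some NNTrainer examples set the loss as a model property instead, so treat this as one possible wiring.

```cpp
// Sketch: attaching an MSE loss by its keyword (assumed ccapi usage).
#include <layer.h>
#include <model.h>

int main() {
  auto model = ml::train::createModel(ml::train::ModelType::NEURAL_NET);

  model->addLayer(ml::train::createLayer("input", {"input_shape=1:1:10"}));
  model->addLayer(ml::train::createLayer("fully_connected", {"unit=1"}));
  // "mse" is the loss-layer keyword from the table above.
  model->addLayer(ml::train::createLayer("mse", {}));
  return 0;
}
```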

Supported Activation Functions

NNTrainer provides:

| Keyword | Activation Name | Description |
| ------- | --------------- | ----------- |
| tanh | tanh function | set as layer property |
| sigmoid | sigmoid function | set as layer property |
| relu | relu function | set as layer property |
| softmax | softmax function | set as layer property |
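
Activations are set through the `activation` property of a layer, or added explicitly with the `activation` layer keyword. A small sketch under the same ccapi assumptions as above; the property key `activation` on the standalone activation layer is an assumption.

```cpp
// Sketch: setting activation functions as layer properties (assumed ccapi usage).
#include <layer.h>

int main() {
  // Fused into a layer through the "activation" property...
  auto hidden = ml::train::createLayer("fully_connected", {"unit=32", "activation=relu"});
  // ...or added as a standalone layer with the "activation" keyword from the layer table.
  auto softmax = ml::train::createLayer("activation", {"activation=softmax"});
  return (hidden && softmax) ? 0 : 1;
}
```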

Tensor

Tensor is responsible for the calculations of a layer. It executes several operations such as addition, division, multiplication, dot product, data averaging, and so on. To accelerate these calculations, CBLAS (the C interface to Basic Linear Algebra Subprograms, for CPU) and cuBLAS (CUDA Basic Linear Algebra Subroutines, for PCs with NVIDIA GPUs) are used for some of the operations. These calculations will be further optimized later. Currently, we support a lazy calculation mode to reduce the overhead of copying tensors during calculations.

| Keyword | Description |
| ------- | ----------- |
| 4D Tensor | B, C, H, W |
| Add/sub/mul/div | - |
| sum, average, argmax | - |
| Dot, Transpose | - |
| normalization, standardization | - |
| save, read | - |
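
To illustrate the kinds of operations listed above, here is a rough sketch against the internal `nntrainer::Tensor` class. Both the header name and the method names (`setValue`, `add`, `multiply`, `transpose`, `dot`) are assumptions based on the operation list; the internal tensor API is not part of the public headers and may change between releases.

```cpp
// Rough sketch only: internal tensor operations (header and method names are assumed).
#include <tensor.h>  // nntrainer::Tensor (internal header, assumed)

int main() {
  // 4D tensors in B, C, H, W order, as described in the table above.
  nntrainer::Tensor a(1, 1, 2, 3);
  nntrainer::Tensor b(1, 1, 2, 3);
  a.setValue(1.0f);  // fill with a constant (assumed helper)
  b.setValue(2.0f);

  nntrainer::Tensor sum = a.add(b);              // element-wise addition
  nntrainer::Tensor scaled = sum.multiply(0.5f); // scalar multiplication
  nntrainer::Tensor at = a.transpose("0:2:1");   // swap H and W (assumed direction string)
  nntrainer::Tensor prod = a.dot(at);            // (2x3) . (3x2) -> (2x2)
  (void)scaled;
  (void)prod;
  return 0;
}
```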

Others

NNTrainer provides:

| Keyword | Name | Description |
| ------- | ---- | ----------- |
| weight_initializer | Weight Initialization | Xavier (Normal/Uniform), LeCun (Normal/Uniform), He (Normal/Uniform) |
| weight_regularizer | Weight Decay (L2Norm only) | requires weight_regularizer_param & type to be set |
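
These are also plain layer properties. A sketch under the same ccapi assumptions; the exact property keys and accepted values (e.g. `xavier_uniform`, `l2norm`, `weight_regularizer_constant`) are taken from the table above and from common NNTrainer examples and may vary by version.

```cpp
// Sketch: weight initialization and weight decay as layer properties (assumed keys/values).
#include <layer.h>

int main() {
  auto fc = ml::train::createLayer(
    "fully_connected",
    {"unit=10",
     "weight_initializer=xavier_uniform",    // Xavier / LeCun / He, normal or uniform
     "weight_regularizer=l2norm",            // L2Norm is the only supported regularizer
     "weight_regularizer_constant=0.001"});  // decay strength (property name assumed)
  return fc ? 0 : 1;
}
```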

APIs

Currently, we provide C APIs for Tizen. C++ APIs are also provided for other platforms. Java and C# APIs will be provided soon.
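
As a rough end-to-end illustration of the C++ API, the sketch below builds, compiles, and initializes a tiny model. It assumes the `ml::train` factories and the `setProperty()` / `compile()` / `initialize()` / `train()` methods from the ccapi headers, plus illustrative property strings; a real application would also attach a dataset before calling `train()`.

```cpp
// Sketch: end-to-end model construction with the C++ API (assumed ccapi usage).
#include <layer.h>
#include <model.h>
#include <optimizer.h>

int main() {
  // Loss given as a model property here; it can also be added as a loss layer.
  auto model = ml::train::createModel(ml::train::ModelType::NEURAL_NET, {"loss=mse"});

  model->addLayer(ml::train::createLayer("input", {"input_shape=1:1:4"}));
  model->addLayer(ml::train::createLayer("fully_connected", {"unit=1", "activation=sigmoid"}));

  model->setOptimizer(ml::train::createOptimizer("sgd", {"learning_rate=0.01"}));
  model->setProperty({"batch_size=4", "epochs=1"});

  model->compile();
  model->initialize();
  // model->train();  // requires a dataset to be set first (omitted here)
  return 0;
}
```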

Getting Started

Instructions for installing NNTrainer.

Running Examples

Instructions for preparing NNTrainer for execution

Examples for NNTrainer

NNTrainer examples for a variety of networks

Open Source License

NNTrainer is an open source project released under the terms of the Apache License, version 2.0.

Contributing

Contributions are welcome! Please see our Contributing Guide for more details.
