
agoncharenko1992 / FAT-fast-adjustable-threshold

Licence: other
This is the code for the FAT method, with links to quantized TFLite models. (CC BY-NC-ND)

Programming Languages

python
139335 projects - #7 most used programming language
Jupyter Notebook
11667 projects

Projects that are alternatives of or similar to FAT-fast-adjustable-threshold

E2E-Object-Detection-in-TFLite
This repository shows how to train a custom detection model with the TFOD API, optimize it with TFLite, and perform inference with the optimized model.
Stars: ✭ 28 (+40%)
Mutual labels:  tflite
Drishti
Drishti is an open-source cross-platform mobile application project at Incubate Nepal that incorporates Machine Learning and Artificial Intelligence to help visually impaired people recognize different currency bills and perform daily cash transactions more effectively. We plan to expand Drishti to other applications like Short Text and Document…
Stars: ✭ 23 (+15%)
Mutual labels:  tflite
YOLOv5-Lite
🍅🍅🍅YOLOv5-Lite: lighter, faster and easier to deploy. Evolved from yolov5 and the size of model is only 930+kb (int8) and 1.7M (fp16). It can reach 10+ FPS on the Raspberry Pi 4B when the input size is 320×320~
Stars: ✭ 1,230 (+6050%)
Mutual labels:  tflite
Food-Ordering-Application-with-Review-Analyzer
A food ordering Android application with a feedback analyzer to improve food suggestions to customers.
Stars: ✭ 67 (+235%)
Mutual labels:  tflite
tensorflow-yolov4
YOLOv4 implemented in TensorFlow 2.
Stars: ✭ 136 (+580%)
Mutual labels:  tflite
glDelegateBench
quick and dirty inference time benchmark for TFLite gles delegate
Stars: ✭ 17 (-15%)
Mutual labels:  tflite
react-native-camera-tflite
Real-time image classification with React Native and TensorFlow Lite.
Stars: ✭ 52 (+160%)
Mutual labels:  tflite
LIGHT-SERNET
Light-SERNet: A lightweight fully convolutional neural network for speech emotion recognition
Stars: ✭ 20 (+0%)
Mutual labels:  tflite
backscrub
Virtual Video Device for Background Replacement with Deep Semantic Segmentation
Stars: ✭ 691 (+3355%)
Mutual labels:  tflite
glDelegateBenchmark
quick and dirty benchmark for TFLite gles delegate on iOS
Stars: ✭ 13 (-35%)
Mutual labels:  tflite
tflite-vx-delegate
Tensorflow Lite external delegate based on TIM-VX
Stars: ✭ 28 (+40%)
Mutual labels:  tflite
PyTorch-ONNX-TFLite
Conversion of PyTorch Models into TFLite
Stars: ✭ 189 (+845%)
Mutual labels:  tflite
mtomo
Multiple types of NN model optimization environments. It is possible to directly access the host PC GUI and the camera to verify the operation. Intel iHD GPU (iGPU) support. NVIDIA GPU (dGPU) support.
Stars: ✭ 24 (+20%)
Mutual labels:  tflite
tflite native
A Dart interface to TensorFlow Lite (tflite) through dart:ffi
Stars: ✭ 127 (+535%)
Mutual labels:  tflite
E2E-tfKeras-TFLite-Android
End to end training MNIST image classifier with tf.Keras, convert to TFLite and deploy to Android
Stars: ✭ 17 (-15%)
Mutual labels:  tflite
CFU-Playground
Want a faster ML processor? Do it yourself! -- A framework for playing with custom opcodes to accelerate TensorFlow Lite for Microcontrollers (TFLM). Online tutorial: https://google.github.io/CFU-Playground/ For reference docs, see the link below.
Stars: ✭ 361 (+1705%)
Mutual labels:  tflite
android tflite
GPU Accelerated TensorFlow Lite applications on Android NDK. Higher accuracy face detection, Age and gender estimation, Human pose estimation, Artistic style transfer
Stars: ✭ 105 (+425%)
Mutual labels:  tflite
TFLite-Mobile-Generic-Object-Localizer
Python TFLite scripts for detecting objects of any class in an image without knowing their label.
Stars: ✭ 42 (+110%)
Mutual labels:  tflite
Selfie2Anime-with-TFLite
How to create Selfie2Anime from tflite model to Android.
Stars: ✭ 70 (+250%)
Mutual labels:  tflite
Mobile Image-Video Enhancement
Sensifai image and video enhancement module on mobiles
Stars: ✭ 39 (+95%)
Mutual labels:  tflite

FAT (Fast Adjustable Thresholds). arXiv: https://arxiv.org/abs/1812.07872

Table of Contents

  • Requirements
  • Train quantization thresholds for MNasNet
  • Using DataGenerator
  • Training quantization thresholds
  • Trained Quantized MNasNet models
  • Authors

Requirements

The following libraries are required:

  • numpy - 1.14.4
  • opencv-python - 3.4.1.15
  • tensorflow-gpu - 1.8.0
  • tqdm - 4.28.1

If you want to use CPU instead of GPU, replace tensorflow-gpu with tensorflow.

The versions of the libraries you use may differ from the versions specified above.
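
As a reference, the pinned versions above can be installed with pip (swap tensorflow-gpu for tensorflow on a CPU-only machine):

$ pip install numpy==1.14.4 opencv-python==3.4.1.15 tensorflow-gpu==1.8.0 tqdm==4.28.1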

Train quantization thresholds for MNasNet

We provide support for the MNasNet models hosted at <www.tensorflow.org/lite/models>. Download any MNasNet model you are interested in and extract the *.pb file from it.

Quantization of MNasNet models is performed in the TFLite manner, so the resulting structure of the quantized models differs from the original.

Extract weights from an existing model

All MNasNet models hosted by TensorFlow share the same structure and differ only in the kernel sizes of their convolution layers, so the model's weights are enough to restore the model itself.

To extract weights from the downloaded *.pb file use the following command:

$ python prepare_weights.py /path/to/the/mnasnet/model.pb

or, if you use GPUs and want to specify which one will be used for the calculations:

$ CUDA_VISIBLE_DEVICES=0 python prepare_weights.py /path/to/the/mnasnet/model.pb

It will create a *.pickle file and place it in the same folder as the model:

/path/to/the/mnasnet/model_weights.pickle

You can also use the notebook [Prepare MNasNet weights.ipynb](Prepare MNasNet weights.ipynb).
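
As a quick sanity check, the resulting file can be inspected with the standard pickle module. This is only a minimal sketch; the exact structure of the stored object is defined by prepare_weights.py, so inspect it rather than assume a layout:

    import pickle

    # Load the extracted weights; the path matches the example above.
    with open("/path/to/the/mnasnet/model_weights.pickle", "rb") as f:
        weights = pickle.load(f)

    # Inspect what prepare_weights.py actually stored.
    print(type(weights))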

Quantize and train the quantization thresholds of MNasNet models

We provide the notebook [Train Thresholds.ipynb](Train Thresholds.ipynb) to quantize MNasNet models and to train their quantization thresholds.

It utilizes the DataGenerator and Trainer classes described below.

You need to follow several steps (additional comments can be found in the notebook):

  1. Specify the path to the *.pickle file containing weights of the model you want to build.

  2. Specify the base output folder, where the checkpoints and the model's adjusted thresholds will be stored. The hierarchy of the output data is as follows:

    .specified_output_folder/
    └── model_name/
        ├── ckpt/
        │   └── ...
        ├── best_ckpt/
        │   └── ...
        ├── model_thresholds.pickle    (will be created during training)
        └── model_fakequant.pb         (will be created during training)
    
  3. Set up the data generators

    1. Specify the paths to the base folders containing training and validation images
    2. Specify the paths to the lists containing the pairs (image name, image label). The format of the lists must be as follows:
      relative/path/to/image_1.JPEG label_1
      relative/path/to/image_2.JPEG label_2
      ...
      
      The paths to the images must be relative to the corresponding image folders.
  4. Set up the calibration parameters (the number of calibration batches and the number of images per batch).

  5. Adjust the training parameters by modifying settings_config/train.json.

    In the notebook Train Thresholds.ipynb, parameters such as save_dir and best_ckpt_dir are ignored and overridden with paths created specifically for the MNasNet models.

  6. Run the model training.

    It can take a while, depending on the number of epochs and the number of training images.

  7. Use the adjusted threshold values to build an MNasNet model with fake-quant nodes. The output model is compatible with TFLite.

    The output model is saved as a *.pb file. You need to use an external tool (such as TensorFlow's toco converter) to convert it to the TFLite format; see the sketch below.
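
A hedged sketch of such a conversion using the tflite_convert CLI that ships with newer TensorFlow 1.x releases (flag names vary slightly between versions, and the node names and quantization parameters below are assumptions; check them against your exported graph):

$ tflite_convert \
    --graph_def_file=model_fakequant.pb \
    --output_file=model_quant.tflite \
    --input_arrays=input_node \
    --output_arrays=output_node \
    --inference_type=QUANTIZED_UINT8 \
    --mean_values=128 \
    --std_dev_values=128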

Using DataGenerator

DataGenerator is a utility that lets you iterate over the images in a simple way.

During initialization DataGenerator expects two paths:

  • path to the folder containing images or other folders with images

  • path to the list of images with corresponding labels in the following format:

    relative/path/to/image_1.JPEG label_1
    relative/path/to/image_2.JPEG label_2
    ...
    

    The path to each image must be relative to the specified folder.

DataGenerator has a public method generate_batches(...) which creates a generator that iterates over the image dataset. This generator yields pairs (batch of images, batch of labels) so it can be used for calibration and validation as well.
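
A minimal usage sketch is shown below; the import path, constructor arguments, and generate_batches parameters are assumptions made for illustration, so check them against the actual class:

    # Hypothetical import path and argument names -- see the DataGenerator source.
    from data_generator import DataGenerator

    generator = DataGenerator("/path/to/validation/images",
                              "/path/to/validation_list.txt")

    for images, labels in generator.generate_batches(batch_size=32):
        # Use the batch for calibration or validation.
        ...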

See the DataGenerator source code for more details.

Training quantization thresholds

Class Trainer provides basic functionality to train the model by minimizing the difference between the output of the trained model and the output of the reference model.

Both the reference model and the trainable one must be in the same graph and have the same input node.

The following parameters are the most important for the training process and must be adjusted for each task individually:

  • learning_rate
  • learning_rate_decay
  • batch_size
  • epochs
  • reinit_adam_after_n_batches

We prefer to store these parameters in the external file settings_config/train.json. However, you can define them wherever you want; just make sure you pass them to the __init__ method of the Trainer.
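
For illustration, settings_config/train.json might look like the snippet below; the parameter names come from the list above, while the values and any additional fields in the real file are placeholders:

    {
        "learning_rate": 1e-4,
        "learning_rate_decay": 0.5,
        "batch_size": 32,
        "epochs": 10,
        "reinit_adam_after_n_batches": 100
    }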

After instantiating the Trainer class you will be able to invoke two main methods: train and validate. Each accepts a session as an input argument and expects all variables to be initialized.
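
A hedged sketch of how these pieces might fit together; the import path, the Trainer constructor signature, and the exact arguments of train and validate are assumptions for illustration:

    import json
    import tensorflow as tf

    # Hypothetical import path -- see the Trainer source for the real one.
    from trainer import Trainer

    with open("settings_config/train.json") as f:
        train_params = json.load(f)

    trainer = Trainer(**train_params)  # assumed: parameters are passed to __init__

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        trainer.train(sess)      # assumed: both methods take the session
        trainer.validate(sess)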

See the Trainer source code for more details.

Trained Quantized MNasNet models

We provide the quantized MNasNet models built with trained quantization thresholds. The input and output node names of the models are input_node and output_node, respectively.

| *.pb file with fake-quant nodes | TFLite model                  | TFLite model accuracy (Top 1, %) |
|---------------------------------|-------------------------------|----------------------------------|
| mnasnet_0.5_224_quant.pb        | mnasnet_0.5_224_quant.tflite  | 66.6                             |
| mnasnet_0.75_224_quant.pb       | mnasnet_0.75_224_quant.tflite | 70.11                            |
| mnasnet_1.0_128_quant.pb        | mnasnet_1.0_128_quant.tflite  | 66.76                            |
| mnasnet_1.0_224_quant.pb        | mnasnet_1.0_224_quant.tflite  | 72.45                            |
| mnasnet_1.3_224_quant.pb        | mnasnet_1.3_224_quant.tflite  | 74.74                            |
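
To try one of the provided *.tflite files, it can be run with the TensorFlow Lite Python interpreter. A minimal sketch, assuming the 224x224 model and a preprocessed uint8 input batch (in recent releases the class is tf.lite.Interpreter; in older 1.x releases it lives under tf.contrib.lite):

    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="mnasnet_1.0_224_quant.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # A dummy uint8 image batch; replace with a real preprocessed image.
    image = np.zeros((1, 224, 224, 3), dtype=np.uint8)

    interpreter.set_tensor(input_details[0]["index"], image)
    interpreter.invoke()
    logits = interpreter.get_tensor(output_details[0]["index"])
    print(logits.argmax())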

Authors

  • Goncharenko Alexander
  • Denisov Andrey