
tensorflow / Compression

License: Apache-2.0
Data compression in TensorFlow

Programming Languages

Python

Projects that are alternatives of or similar to Compression

Aimet
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
Stars: ✭ 453 (-1.09%)
Mutual labels:  deep-neural-networks, compression
Tfmesos
Tensorflow in Docker on Mesos #tfmesos #tensorflow #mesos
Stars: ✭ 194 (-57.64%)
Mutual labels:  deep-neural-networks, ml
Deephyper
DeepHyper: Scalable Asynchronous Neural Architecture and Hyperparameter Search for Deep Neural Networks
Stars: ✭ 117 (-74.45%)
Mutual labels:  deep-neural-networks, ml
Dltk
Deep Learning Toolkit for Medical Image Analysis
Stars: ✭ 1,249 (+172.71%)
Mutual labels:  deep-neural-networks, ml
Compressai
A PyTorch library and evaluation platform for end-to-end compression research
Stars: ✭ 246 (-46.29%)
Mutual labels:  deep-neural-networks, compression
Niftynet
[unmaintained] An open-source convolutional neural networks platform for research in medical image analysis and image-guided therapy
Stars: ✭ 1,276 (+178.6%)
Mutual labels:  deep-neural-networks, ml
Andrew Ng Notes
Handwritten notes for Andrew Ng's Coursera courses.
Stars: ✭ 180 (-60.7%)
Mutual labels:  deep-neural-networks, ml
Mnn
MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba
Stars: ✭ 6,284 (+1272.05%)
Mutual labels:  deep-neural-networks, ml
Darkon
Toolkit to Hack Your Deep Learning Models
Stars: ✭ 231 (-49.56%)
Mutual labels:  deep-neural-networks, ml
Ml Examples
Arm Machine Learning tutorials and examples
Stars: ✭ 207 (-54.8%)
Mutual labels:  deep-neural-networks, ml
Caffe2
Caffe2 is a lightweight, modular, and scalable deep learning framework.
Stars: ✭ 8,409 (+1736.03%)
Mutual labels:  deep-neural-networks, ml
Caffe
Caffe for Sparse and Low-rank Deep Neural Networks
Stars: ✭ 339 (-25.98%)
Mutual labels:  deep-neural-networks, compression
Ludwig
Data-centric declarative deep learning framework
Stars: ✭ 8,018 (+1650.66%)
Mutual labels:  deep-neural-networks, ml
Onnx
Open standard for machine learning interoperability
Stars: ✭ 11,829 (+2482.75%)
Mutual labels:  deep-neural-networks, ml
Skater
Python Library for Model Interpretation/Explanations
Stars: ✭ 973 (+112.45%)
Mutual labels:  deep-neural-networks, ml
Djl
An Engine-Agnostic Deep Learning Framework in Java
Stars: ✭ 2,262 (+393.89%)
Mutual labels:  deep-neural-networks, ml
Serving
A flexible, high-performance serving system for machine learning models
Stars: ✭ 5,306 (+1058.52%)
Mutual labels:  deep-neural-networks, ml
Ffdl
Fabric for Deep Learning (FfDL, pronounced fiddle) is a Deep Learning Platform offering TensorFlow, Caffe, PyTorch etc. as a Service on Kubernetes
Stars: ✭ 640 (+39.74%)
Mutual labels:  deep-neural-networks, ml
Oneflow
OneFlow is a performance-centered and open-source deep learning framework.
Stars: ✭ 2,868 (+526.2%)
Mutual labels:  deep-neural-networks, ml
Tensorflow
An Open Source Machine Learning Framework for Everyone
Stars: ✭ 161,335 (+35125.98%)
Mutual labels:  deep-neural-networks, ml

TensorFlow Compression

TensorFlow Compression (TFC) contains data compression tools for TensorFlow.

You can use this library to build your own ML models with end-to-end optimized data compression built in. It is useful for finding storage-efficient representations of your data (images, features, examples, etc.) while sacrificing only a tiny fraction of model performance. It can compress any floating point tensor to a much smaller sequence of bits.

Specifically, the entropy model classes in this library simplify the process of designing rate–distortion optimized codes. During training, they act like likelihood models. Once training is completed, they encode floating point tensors into optimal bit sequences by automating the design of probability tables and calling a range coder implementation behind the scenes.

The main novelty of this method over traditional transform coding is the stochastic minimization of the rate–distortion Lagrangian, and the use of nonlinear transforms implemented by neural networks. For an introduction, consider our paper on nonlinear transform coding, or watch @jonycgn's talk on learned image compression.
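As a rough sketch of how these pieces fit together, the snippet below instantiates an entropy model over a made-up latent tensor, forms the rate–distortion Lagrangian during training, and performs an actual encode/decode round trip afterwards. The class names follow the current TFC API, but the shapes, the latent tensor, and the lambda value are invented for illustration; the scripts in the models directory are the authoritative examples.

import tensorflow as tf
import tensorflow_compression as tfc

# Hypothetical latent tensor: a batch of 8 feature maps with 16 channels.
y = tf.random.normal((8, 32, 32, 16))

# A learned prior over the latent; during training it acts as a likelihood model.
prior = tfc.NoisyDeepFactorized(batch_shape=(16,))

# coding_rank=3: the last three axes (height, width, channels) are coded jointly.
entropy_model = tfc.ContinuousBatchedEntropyModel(
    prior, coding_rank=3, compression=True)

# Training mode: returns a perturbed version of y and an estimate of its bit cost.
y_tilde, bits = entropy_model(y, training=True)
rate = tf.reduce_mean(bits)

# Stand-in distortion term; a real model would measure it on reconstructed images.
distortion = tf.reduce_mean(tf.math.squared_difference(y, y_tilde))
lmbda = 0.01  # arbitrary trade-off weight
loss = rate + lmbda * distortion  # the rate-distortion Lagrangian

# After training: encode into actual bit strings and decode them back.
strings = entropy_model.compress(y)
y_hat = entropy_model.decompress(strings, y.shape[1:-1])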

Documentation & getting help

Please post all questions or comments in our Google group; file GitHub issues only for actual bugs or feature requests. If you post to the group instead, you may get a faster answer, and you help other people find the question or answer more easily later.

Refer to the API documentation for a complete description of the classes and functions this package implements.

Installation

Note: Precompiled packages are currently only provided for Linux and Darwin/Mac OS and Python 3.6-3.8. To use these packages on Windows, consider using a TensorFlow Docker image and installing TensorFlow Compression using pip inside the Docker container.

Set up an environment in which you can install precompiled binary Python packages using the pip command. Refer to the TensorFlow installation instructions for more information on how to set up such a Python environment.

The current version of TensorFlow Compression requires TensorFlow 2. For versions compatible with TensorFlow 1, see our previous releases. Note: Because TFC currently relies on features and fixes designated for TF 2.5, the pip package currently depends on tf-nightly packages. Once TF 2.5 is released (likely in April 2021), we will resume depending on the stable version of TF.

pip

To install TFC via pip, run the following command:

pip install tensorflow-compression

To test that the installation works correctly, you can run the unit tests with:

python -m tensorflow_compression.all_tests

Once the command finishes, you should see a message OK (skipped=29) or similar in the last line.

Docker

To use a Docker container (e.g. on Windows), be sure to install Docker (e.g., Docker Desktop), use a TensorFlow Docker image, and then run the pip install command inside the Docker container, not on the host. For instance, you can use a command line like this:

docker run tensorflow/tensorflow:nightly bash -c \
    "pip install tensorflow-compression &&
     python -m tensorflow_compression.all_tests"

This will fetch the TensorFlow Docker image if it's not already cached, install the pip package and then run the unit tests to confirm that it works.

Anaconda

Anaconda appears to ship its own binary version of TensorFlow, which is incompatible with our pip package. To avoid conflicts, always install TensorFlow via pip rather than conda. For example, this creates an Anaconda environment with Python 3.8 and CUDA libraries, and then installs TensorFlow and TensorFlow Compression:

conda create --name ENV_NAME python=3.8 cudatoolkit=10.0 cudnn
conda activate ENV_NAME
pip install tensorflow-compression

Usage

We recommend importing the library from your Python code as follows:

import tensorflow as tf
import tensorflow_compression as tfc
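A quick way to confirm that both packages import and to see which versions you got (tf.__version__ is standard; tfc.__version__ is assumed here to exist as well):

print(tf.__version__, tfc.__version__)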

Using a pre-trained model to compress an image

In the models directory, you'll find a Python script, tfci.py. Download the file and run:

python tfci.py -h

This will give you a list of options. Briefly, the command

python tfci.py compress <model> <PNG file>

will compress an image using a pre-trained model and write a file ending in .tfci. Run python tfci.py models to list the supported pre-trained models. The command

python tfci.py decompress <TFCI file>

will decompress a TFCI file and write a PNG file. By default, an output file will be named like the input file, only with the appropriate file extension appended (any existing extensions will not be removed).
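For example, a full round trip could look like this (the model name and file name are placeholders; pick any model listed by python tfci.py models):

python tfci.py compress bmshj2018-factorized-mse-1 example.png
python tfci.py decompress example.png.tfci

Following the naming convention above, the first command writes example.png.tfci, and the second writes example.png.tfci.png.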

Training your own model

The models directory contains several implementations of published image compression models to enable easy experimentation. The instructions below use a re-implementation of the model published in:

"End-to-end optimized image compression"
J. Ballé, V. Laparra, E. P. Simoncelli
https://arxiv.org/abs/1611.01704

Note that the models directory is not contained in the pip package. The models are meant to be downloaded individually. Download the file bls2017.py and run:

python bls2017.py -h

This will list the available command line options for the implementation. Training can be as simple as the following command:

python bls2017.py -V train

This will use the default settings. Note that unless a custom training dataset is provided via --train_glob, the CLIC dataset will be downloaded using TensorFlow Datasets.
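For example, to train on your own set of images instead, a command along these lines should work (the glob is a placeholder; check python bls2017.py -h for the exact flag placement):

python bls2017.py -V train --train_glob="/path/to/images/*.png"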

The most important training parameter is --lambda, which controls the trade-off between bitrate and distortion that the model will be optimized for. The number of channels per layer matters, too: models tuned for higher bitrates (or, equivalently, lower distortion) tend to require transforms with greater approximation capacity (i.e., more channels), so to optimize performance, make sure the number of channels is large enough, erring on the side of too many rather than too few. This is described in more detail in:

"Efficient nonlinear transforms for lossy image compression"
J. Ballé
https://arxiv.org/abs/1802.00847
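For example, training two models at opposite ends of the trade-off might look like this (the lambda values are only illustrative; train each model into its own --model_path directory so they don't overwrite each other):

python bls2017.py -V train --lambda 0.001   # low bitrate, higher distortion
python bls2017.py -V train --lambda 0.1     # high bitrate, lower distortion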

If you wish, you can monitor progress with TensorBoard. To do this, create a TensorBoard instance in the background before starting the training, then point your web browser to port 6006 on your machine:

tensorboard --logdir=/tmp/train_bls2017 &

When training has finished, the Python script saves the trained model to the directory specified with --model_path (by default, bls2017 in the current directory) in TensorFlow's SavedModel format. The script can then be used to compress and decompress images as follows. The same saved model must be accessible to both commands.

python bls2017.py [options] compress original.png compressed.tfci
python bls2017.py [options] decompress compressed.tfci reconstruction.png

Building pip packages

This section describes the necessary steps to build your own pip packages of TensorFlow Compression. This may be necessary to install it on platforms for which we don't provide precompiled binaries (currently only Linux and Darwin).

We use the custom-op Docker images (e.g. tensorflow/tensorflow:nightly-custom-op-ubuntu16) for building pip packages for Linux. Note that this is different from tensorflow/tensorflow:devel. To be compatible with the TensorFlow pip package, the GCC version must match, but tensorflow/tensorflow:devel has a different GCC version installed. For more information, refer to the custom-op instructions.

Inside a Docker container from the image, the following steps need to be taken.

  1. Clone the tensorflow/compression repo from GitHub.
  2. Run :build_pip_pkg inside the cloned repo.

For example:

sudo docker run -v /tmp/tensorflow_compression:/tmp/tensorflow_compression \
    tensorflow/tensorflow:nightly-custom-op-ubuntu16 bash -c \
    "git clone https://github.com/tensorflow/compression.git
         /tensorflow_compression &&
     cd /tensorflow_compression &&
     bazel run -c opt --copt=-mavx :build_pip_pkg"

The wheel file is created inside /tmp/tensorflow_compression. Optimization flags can be passed via --copt to the bazel run command above.
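For example, to additionally enable AVX2 and FMA instructions on CPUs that support them (these particular flags are just an illustration):

bazel run -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma :build_pip_pkg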

To test the created package, first install the resulting wheel file:

pip install /tmp/tensorflow_compression/tensorflow_compression-*.whl

Then run the unit tests (do not run them from the directory containing the WORKSPACE file; there, the Python interpreter would attempt to import the tensorflow_compression packages from the source tree rather than from the installed package):

pushd /tmp
python -m tensorflow_compression.all_tests
popd

When done, you can uninstall the pip package again:

pip uninstall tensorflow-compression

To build packages for Darwin (and potentially other platforms), you can follow the same steps, but the Docker image should not be necessary.

Evaluation

We provide evaluation results for several image compression methods in terms of various metrics in different colorspaces. Please see the results subdirectory for more information.

Authors

Note that this is not an officially supported Google product.
