
thierrydumas / autoencoder_based_image_compression

Licence: other
Autoencoder based image compression: can the learning be quantization independent? https://arxiv.org/abs/1802.09371

Programming Languages

C++
36643 projects - #6 most used programming language
Python
139335 projects - #7 most used programming language
HTML
75241 projects
TeX
3793 projects
C
50402 projects - #5 most used programming language
JavaScript
184084 projects - #8 most used programming language

Projects that are alternatives to or similar to autoencoder_based_image_compression

CAE-ADMM
CAE-ADMM: Implicit Bitrate Optimization via ADMM-Based Pruning in Compressive Autoencoders
Stars: ✭ 34 (+61.9%)
Mutual labels:  image-compression, autoencoders
ppq
PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool.
Stars: ✭ 281 (+1238.1%)
Mutual labels:  quantization
amr
Official adversarial mixup resynthesis repository
Stars: ✭ 31 (+47.62%)
Mutual labels:  autoencoders
navec
Compact high quality word embeddings for Russian language
Stars: ✭ 118 (+461.9%)
Mutual labels:  quantization
imagezero
Fast Lossless Color Image Compression Library
Stars: ✭ 49 (+133.33%)
Mutual labels:  image-compression
zImageOptimizer
Simple image optimizer for JPEG, PNG and GIF images on Linux, MacOS and FreeBSD.
Stars: ✭ 108 (+414.29%)
Mutual labels:  image-compression
neural-compressor
Intel® Neural Compressor (formerly Intel® Low Precision Optimization Tool) aims to provide unified APIs for network compression technologies, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks, in pursuit of optimal inference performance.
Stars: ✭ 666 (+3071.43%)
Mutual labels:  quantization
xice7-imageKit
Simple image processing implemented in Java.
Stars: ✭ 23 (+9.52%)
Mutual labels:  image-compression
image-classification
A collection of SOTA Image Classification Models in PyTorch
Stars: ✭ 70 (+233.33%)
Mutual labels:  quantization
bert-squeeze
🛠️ Tools for Transformers compression using PyTorch Lightning ⚡
Stars: ✭ 56 (+166.67%)
Mutual labels:  quantization
camalian
Library used to deal with colors and images. You can extract colors from images.
Stars: ✭ 45 (+114.29%)
Mutual labels:  quantization
ATMC
[NeurIPS'2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, “Model Compression with Adversarial Robustness: A Unified Optimization Framework”
Stars: ✭ 41 (+95.24%)
Mutual labels:  quantization
lixinger-openapi
Python API for the Lixinger developer platform (unofficial).
Stars: ✭ 43 (+104.76%)
Mutual labels:  quantization
Stochastic-Quantization
Training Low-bits DNNs with Stochastic Quantization
Stars: ✭ 70 (+233.33%)
Mutual labels:  quantization
api
docs.nekos.moe/
Stars: ✭ 31 (+47.62%)
Mutual labels:  image-compression
image-optimizer
A free and open source tool for optimizing images and vector graphics.
Stars: ✭ 740 (+3423.81%)
Mutual labels:  image-compression
DNNAC
All about acceleration and compression of Deep Neural Networks
Stars: ✭ 29 (+38.1%)
Mutual labels:  quantization
continuous Bernoulli
C programs for the simulator, transformation, and test statistic of the continuous Bernoulli distribution; the accompanying book also covers the continuous binomial and continuous trinomial distributions.
Stars: ✭ 22 (+4.76%)
Mutual labels:  autoencoders
qoix
Elixir implementation of the Quite OK Image format
Stars: ✭ 18 (-14.29%)
Mutual labels:  image-compression
autoencoders tensorflow
Automatic feature engineering using deep learning and Bayesian inference using TensorFlow.
Stars: ✭ 66 (+214.29%)
Mutual labels:  autoencoders

Autoencoder based image compression: can the learning be quantization independent?

This repository is a TensorFlow implementation of the paper "Autoencoder based image compression: can the learning be quantization independent?", ICASSP, 2018.

ICASSP 2018 paper | Project page with visualizations

The code is tested on Linux and Windows.

Prerequisites

  • Python (code tested using Python 2.7.9 and Python 3.6.3)
  • numpy (version >= 1.11.0)
  • tensorflow (optional GPU support), see TensorflowInstallationWebPage (for Python 2.7.9, the code was tested using TensorFlow 0.11.0; for Python 3.6.3, using TensorFlow 1.4.0; the code should therefore work with any TensorFlow 0.x or 1.x version)
  • cython (code tested with cython 0.25.2)
  • matplotlib (code tested with matplotlib 1.5.3)
  • pillow (code tested with pillow 3.4.2)
  • scipy (code tested with scipy 0.18.1)
  • six
  • glymur (code tested with Glymur 0.8.10), see GlymurWebPage
  • ImageMagick, see ImageMagickWebPage
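
To check the Python prerequisites against the tested versions above, one can print the installed versions. This convenience snippet is not part of the repository; it only covers the main packages.

    # Convenience check (not part of the repository): print the installed
    # versions of the main Python prerequisites.
    import numpy
    import tensorflow
    import matplotlib
    import scipy

    for module in (numpy, tensorflow, matplotlib, scipy):
        print('{0}: {1}'.format(module.__name__, module.__version__))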

Cloning the code

Clone this repository into the current folder.

git clone https://github.com/thierrydumas/autoencoder_based_image_compression.git
cd autoencoder_based_image_compression/kodak_tensorflow/

Compilation

  1. Compilation of the C++ lossless coder via Cython (a minimal sketch of such a build script follows this list).
    cd lossless
    python setup.py build_ext --inplace
    cd ../
  2. Compilation of HEVC/H.265.
    • For Linux,
      cd hevc/HM-16.15/build/linux/
      make
      cd ../../../../
    • For Windows, use Visual Studio 2015 and the solution file at "hevc/HM-16.15/build/HM_vc2015.sln". For more information, see HEVCSoftwareWebPage.
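
For reference, a Cython build script of the kind invoked in step 1 typically looks like the sketch below. The extension name and source file are placeholders; the repository's "lossless/setup.py" is the authoritative version.

    # Minimal sketch of a Cython build script; names are placeholders.
    from distutils.core import setup
    from distutils.extension import Extension
    from Cython.Build import cythonize

    extension = Extension('lossless',
                          sources=['lossless.pyx'],
                          language='c++')

    # "python setup.py build_ext --inplace" compiles the extension module
    # next to the sources.
    setup(ext_modules=cythonize([extension]))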

Quick start: reproducing the main results of the paper

  1. Creation of the Kodak test set containing 24 luminance images.
    python creating_kodak.py
  2. Comparison of several trained autoencoders, JPEG2000, and H.265 in terms of rate-distortion on the Kodak test set.
    python reconstructing_eae_kodak.py
    After running Step 2, the reconstructions of the Kodak luminance images, together with the rates and PSNRs obtained when compressing them via the trained autoencoders, JPEG2000, and H.265, are stored in the folder "eae/visualization/test/checking_reconstructing/kodak/".
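
As a reminder of what the stored numbers mean, rate (in bits per pixel) and PSNR (in dB) for 8-bit luminance images are conventionally computed as below. This is a sketch of the definitions, not the repository's own code.

    import numpy

    def compute_psnr(reference_uint8, reconstruction_uint8):
        """Computes the PSNR in dB between two 8-bit luminance images."""
        difference = reference_uint8.astype(numpy.float64) - reconstruction_uint8.astype(numpy.float64)
        mse = numpy.mean(difference**2)
        return 10.*numpy.log10((255.**2)/mse)

    def compute_rate(nb_bits, height, width):
        """Computes the rate in bits per pixel from the bitstream size."""
        return float(nb_bits)/(height*width)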

Quick start: training an autoencoder

  1. First of all, the ImageNet images must be downloaded. In our case, it suffices to download the ILSVRC2012 validation images, "ILSVRC2012_img_val.tar" (6.3 GB), see ImageNetDownloadWebPage. Say the path to "ILSVRC2012_img_val.tar" on your computer is "path/to/folder_0/ILSVRC2012_img_val.tar" and you want the unpacked images to be put into the folder "path/to/folder_1/" before the script "creating_imagenet.py" preprocesses them. The ImageNet training and validation sets of luminance images are then created via
    python creating_imagenet.py path/to/folder_1/ --path_to_tar=path/to/folder_0/ILSVRC2012_img_val.tar
  2. The training of an autoencoder on the ImageNet training set is launched via the command below, where 1.0 is the initial value of the quantization bin widths and 14000.0 is the coefficient weighting the distortion term and the rate term in the objective function minimized over the autoencoder parameters. The script "training_eae_imagenet.py" can split the entire training into several successive parts; the last argument, 0, means that it runs the first part, and it is incremented by 1 for each successive part. A conceptual sketch of such a rate-distortion objective follows this list.
    python training_eae_imagenet.py 1.0 14000.0 0
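
For intuition, here is a conceptual sketch of such a rate-distortion objective, in TensorFlow 1.x style. It is not the loss defined in "training_eae_imagenet.py": the rate term below is a placeholder proxy, and the additive uniform noise is a common differentiable approximation of uniform scalar quantization, with the noise width matching the quantization bin widths (1.0 above).

    import tensorflow as tf

    def rate_distortion_objective(visible, latent, decoder, gamma, bin_width=1.0):
        """Conceptual sketch of a rate-distortion objective (not the repository's loss)."""
        # Uniform scalar quantization is commonly approximated during training
        # by additive uniform noise of the same width as the quantization bins,
        # which keeps the objective differentiable.
        latent_noisy = latent + tf.random_uniform(tf.shape(latent),
                                                  minval=-0.5*bin_width,
                                                  maxval=0.5*bin_width)
        reconstruction = decoder(latent_noisy)

        # Distortion term: mean squared reconstruction error.
        distortion = tf.reduce_mean(tf.square(visible - reconstruction))

        # Rate term: placeholder for a differentiable estimate of the entropy of
        # the quantized latent variables; a real implementation would model
        # their probability distribution.
        rate = tf.reduce_mean(tf.abs(latent_noisy))

        # `gamma` weights the distortion term against the rate term (14000.0 in
        # the command above); which term it multiplies is a detail of the
        # repository's implementation.
        return gamma*distortion + rate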

Full functionality

The documentation "documentation_kodak/documentation_code.html" describes the full functionality of the paper's code.

A simple example

The folder "svhn" contains a simple example that introduces the code of the paper; its documentation is in the file "documentation_svhn/documentation_code.html". If you feel comfortable with autoencoders, this example can be skipped. Its purpose is to clarify the training of a rate-distortion optimized autoencoder, which is why a simple rate-distortion optimized autoencoder with very few hidden units is trained on tiny images (32x32 SVHN digits).
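
As an illustration only, a small fully-connected autoencoder of the kind this example trains could look like the following TensorFlow 1.x sketch. The layer sizes are made up, and the repository's "svhn" code is the authoritative version.

    import tensorflow as tf

    # Illustrative sizes: 32x32 digits flattened to 1024 values, mapped to a
    # small number of latent variables (both numbers are made up).
    nb_visible = 1024
    nb_hidden = 64

    visible = tf.placeholder(tf.float32, [None, nb_visible])

    # Encoder: one affine layer followed by a ReLU.
    weights_encoder = tf.Variable(tf.random_normal([nb_visible, nb_hidden], stddev=0.01))
    biases_encoder = tf.Variable(tf.zeros([nb_hidden]))
    latent = tf.nn.relu(tf.matmul(visible, weights_encoder) + biases_encoder)

    # Decoder: one affine layer mapping the latent variables back to the pixels.
    weights_decoder = tf.Variable(tf.random_normal([nb_hidden, nb_visible], stddev=0.01))
    biases_decoder = tf.Variable(tf.zeros([nb_visible]))
    reconstruction = tf.matmul(latent, weights_decoder) + biases_decoder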

Citing

@InProceedings{autoencoder_based_icassp2018,
  author = {Dumas, Thierry and Roumy, Aline and Guillemot, Christine},
  title = {Autoencoder based image compression: can the learning be quantization independent?},
  booktitle = {ICASSP},
  year = {2018}
}