
tukilabs / Video-Compression-Net

Licence: other
A new approach to video compression that refines the shortcomings of the conventional approach, substituting each traditional component with its neural network counterpart. Our proposed work consists of motion estimation, compression and compensation, and residue compression, learned end-to-end to minimize the rate-distortion trade-off. The whole…

Programming Languages

Jupyter Notebook
Python
Shell

Projects that are alternatives of or similar to Video-Compression-Net

Awesome Tensorlayer
A curated list of dedicated resources and applications
Stars: ✭ 248 (+1140%)
Mutual labels:  autoencoder
tensorflow-mnist-AAE
Tensorflow implementation of adversarial auto-encoder for MNIST
Stars: ✭ 86 (+330%)
Mutual labels:  autoencoder
GATE
The implementation of "Gated Attentive-Autoencoder for Content-Aware Recommendation"
Stars: ✭ 65 (+225%)
Mutual labels:  autoencoder
Unsupervised-Classification-with-Autoencoder
Using Autoencoders for classification as unsupervised machine learning algorithms with Deep Learning.
Stars: ✭ 43 (+115%)
Mutual labels:  autoencoder
EZyRB
Easy Reduced Basis method
Stars: ✭ 49 (+145%)
Mutual labels:  autoencoder
dltf
Hands-on in-person workshop for Deep Learning with TensorFlow
Stars: ✭ 14 (-30%)
Mutual labels:  autoencoder
Pytorch Vae
A Variational Autoencoder (VAE) implemented in PyTorch
Stars: ✭ 237 (+1085%)
Mutual labels:  autoencoder
Continuous-Image-Autoencoder
Deep learning image autoencoder that does not depend on image resolution
Stars: ✭ 20 (+0%)
Mutual labels:  autoencoder
eForest
This is the official implementation for the paper 'AutoEncoder by Forest'
Stars: ✭ 71 (+255%)
Mutual labels:  autoencoder
probabilistic nlg
Tensorflow Implementation of Stochastic Wasserstein Autoencoder for Probabilistic Sentence Generation (NAACL 2019).
Stars: ✭ 28 (+40%)
Mutual labels:  autoencoder
pytorch integrated cell
Integrated Cell project implemented in pytorch
Stars: ✭ 40 (+100%)
Mutual labels:  autoencoder
adversarial-autoencoder
Tensorflow 2.0 implementation of Adversarial Autoencoders
Stars: ✭ 17 (-15%)
Mutual labels:  autoencoder
Face-Landmarking
Real time face landmarking using decision trees and NN autoencoders
Stars: ✭ 73 (+265%)
Mutual labels:  autoencoder
DESOM
🌐 Deep Embedded Self-Organizing Map: Joint Representation Learning and Self-Organization
Stars: ✭ 76 (+280%)
Mutual labels:  autoencoder
Dual-CNN-Models-for-Unsupervised-Monocular-Depth-Estimation
Dual CNN Models for Unsupervised Monocular Depth Estimation
Stars: ✭ 36 (+80%)
Mutual labels:  autoencoder
Link Prediction
Representation learning for link prediction within social networks
Stars: ✭ 245 (+1125%)
Mutual labels:  autoencoder
Image-Retrieval
Image retrieval program made in Tensorflow supporting VGG16, VGG19, InceptionV3 and InceptionV4 pretrained networks and own trained Convolutional autoencoder.
Stars: ✭ 56 (+180%)
Mutual labels:  autoencoder
peax
Peax is a tool for interactive visual pattern search and exploration in epigenomic data based on unsupervised representation learning with autoencoders
Stars: ✭ 63 (+215%)
Mutual labels:  autoencoder
topological-autoencoders
Code for the paper "Topological Autoencoders" by Michael Moor, Max Horn, Bastian Rieck, and Karsten Borgwardt.
Stars: ✭ 82 (+310%)
Mutual labels:  autoencoder
Unsupervised Deep Learning
Unsupervised (Self-Supervised) Clustering of Seismic Signals Using Deep Convolutional Autoencoders
Stars: ✭ 36 (+80%)
Mutual labels:  autoencoder

Video-Compression-Net

This project presents a neural architecture to compress videos (sequences of image frames), along with pre-trained models. Our work is inspired by DVC, and we use tensorflow-compression for bitrate estimation and entropy coding. Compression is measured in terms of actual file size.
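The predict-then-code idea behind this pipeline can be illustrated with a toy NumPy sketch (not the project's actual network): motion compensation predicts the current frame from the previous one, and only the residue is quantized and coded.

```python
import numpy as np

# Toy 1-D "frames": the current frame is the previous one shifted by 2.
prev_frame = np.array([0, 0, 5, 9, 5, 0, 0, 0], dtype=float)
cur_frame = np.array([0, 0, 0, 0, 5, 9, 5, 0], dtype=float)

# "Motion estimation": here we simply assume the shift of 2 is known.
predicted = np.roll(prev_frame, 2)   # motion-compensated prediction

residue = cur_frame - predicted      # small signal, cheap to code
quantized = np.round(residue)        # stand-in for lossy residue compression

reconstructed = predicted + quantized
print(np.abs(reconstructed - cur_frame).max())  # 0.0 for this integer toy
```

In the real network, the hand-picked shift becomes a learned optical-flow estimate, and the rounding step becomes a learned entropy-coded residue compressor.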

Citation

If you find our paper useful, please cite:

@inproceedings{dhungel2020efficient,
  title={An Efficient Video Compression Network},
  author={Dhungel, Prasanga and Tandan, Prashant and Bhusal, Sandesh and Neupane, Sobit and Shakya, Subarna},
  booktitle={2020 2nd International Conference on Advances in Computing, Communication Control and Networking (ICACCCN)},
  pages={1028--1034},
  year={2020},
  organization={IEEE}
}

Installation

For installation, simply run the following command:

pip install -r requirements.txt

For GPU support, replace the tensorflow==1.15.0 line in requirements.txt with tensorflow-gpu==1.15.0.
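That one-line swap can also be scripted with sed. The sketch below runs on a scratch copy with illustrative contents; in the repository you would run the sed command directly on requirements.txt:

```shell
# Create a scratch copy with illustrative contents (use the real file in practice).
printf 'tensorflow==1.15.0\nnumpy\n' > /tmp/requirements.txt

# Swap the CPU TensorFlow pin for the GPU build, in place.
sed -i 's/^tensorflow==1\.15\.0$/tensorflow-gpu==1.15.0/' /tmp/requirements.txt

grep '^tensorflow' /tmp/requirements.txt
```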

Note: Precompiled packages for tensorflow-compression are currently only provided for Linux (Python 2.7, 3.3-3.6) and Darwin/Mac OS (Python 2.7, 3.7). For Windows, please refer to this.

Pre-trained Models

Pre-trained models are available at checkpoints. The models suffixed with "msssim" are optimized with MS-SSIM, while the rest are optimized with PSNR. The integer in the filename denotes the lambda (the weight assigned to distortion relative to the bitrate): the higher the value of lambda, the lower the distortion and the higher the bitrate.
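The role of lambda can be sketched with the usual rate-distortion objective, loss = lambda * distortion + bitrate. The operating points below are hypothetical numbers for illustration, not measured results from the paper:

```python
# Hypothetical (distortion, bitrate) operating points a codec could reach.
operating_points = [
    {"distortion": 0.020, "bitrate": 0.10},
    {"distortion": 0.008, "bitrate": 0.25},
    {"distortion": 0.003, "bitrate": 0.60},
]

def best_point(lam):
    # A checkpoint trained with weight `lam` effectively settles on the
    # point minimizing lam * distortion + bitrate.
    return min(operating_points, key=lambda p: lam * p["distortion"] + p["bitrate"])

print(best_point(4))     # low lambda: tolerate distortion, save bits
print(best_point(1024))  # high lambda: lower distortion, higher bitrate
```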

Compression

Run the following command and follow the instructions:

python compress.py -h 

For example,

python compress.py -i demo/input/ -o demo/compressed/ -m checkpoints/videocompressor1024.pkl -f 101

This compresses the frames in demo/input/ into compressed files in demo/compressed/.

Note: Right now, our work only supports RGB frames whose height and width are multiples of 16. Needless to say, higher-resolution images require more time to train, compress, and decompress.
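If your frames are not 16-aligned, a preprocessing step could zero-pad them before compression. The helper below is an illustrative sketch, not part of the repository:

```python
import numpy as np

def pad_to_multiple_of_16(frame):
    """Zero-pad an HxWx3 RGB frame so height and width become multiples of 16."""
    h, w = frame.shape[:2]
    pad_h = (16 - h % 16) % 16   # 0 if already aligned
    pad_w = (16 - w % 16) % 16
    return np.pad(frame, ((0, pad_h), (0, pad_w), (0, 0)))

frame = np.zeros((101, 250, 3), dtype=np.uint8)  # 101x250 is not 16-aligned
padded = pad_to_multiple_of_16(frame)
print(padded.shape)  # (112, 256, 3)
```

After decompression, the padding can be cropped away to recover the original resolution.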

Reconstruction

Run the following command and follow the instructions:

python decompress.py -h 

For example,

python decompress.py -i demo/compressed/ -o demo/reconstructed -m checkpoints/videocompressor1024.pkl -f 101

This reconstructs the original frames into demo/reconstructed/ with some compression artifacts.
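To sanity-check reconstruction quality, you could compute PSNR between an original frame and its reconstruction. This is a generic NumPy sketch, not code from the repository:

```python
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two frames of equal shape."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

# Synthetic example: a random frame with small perturbations.
rng = np.random.default_rng(0)
orig = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
noisy = np.clip(orig.astype(int) + rng.integers(-2, 3, size=orig.shape),
                0, 255).astype(np.uint8)
print(round(psnr(orig, noisy), 1))  # small noise gives a high PSNR
```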

Training your own model

We trained the network with the vimeo-septuplet dataset. To download the dataset, run the script download_dataset.sh as:

sh download_dataset.sh

Here, we provide a small portion of the large dataset to show its layout. You can train your own model by simply executing the following command and following the instructions:

python train.py -h 

For training, the dataset structure should be the same as the vimeo-septuplet structure; otherwise, you should write your own data parser for the network.
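A custom parser might, for instance, enumerate every directory that holds a full 7-frame clip (im1.png through im7.png, as in vimeo-septuplet). The sketch below is illustrative, including the demo directory names:

```python
import os

def list_septuplets(root):
    """Return directories under `root` containing a full clip im1.png..im7.png."""
    clips = []
    for dirpath, _dirnames, filenames in os.walk(root):
        names = set(filenames)
        if all("im%d.png" % i in names for i in range(1, 8)):
            clips.append(dirpath)
    return sorted(clips)

# Build a tiny fake dataset and parse it (hypothetical paths).
for clip in ("sequences/00001/0001", "sequences/00001/0002"):
    os.makedirs(os.path.join("/tmp/vimeo_demo", clip), exist_ok=True)
    for i in range(1, 8):
        open(os.path.join("/tmp/vimeo_demo", clip, "im%d.png" % i), "w").close()

print(list_septuplets("/tmp/vimeo_demo"))  # the two clip directories
```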

Evaluation

We perform compression and reconstruction in a single file, test.py, for evaluation. To evaluate the compression and distortion, execute:

python test.py -h

and follow the instructions. For example,

python test.py -m checkpoints/videocompressor256-msssim.pkl
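Alongside distortion, the evaluation reports bitrate. Bits per pixel (bpp) can be derived from the compressed file size with a generic helper like the one below (not part of test.py; the numbers are hypothetical):

```python
def bits_per_pixel(compressed_bytes, height, width, num_frames):
    """Average bits per pixel for a compressed sequence of frames."""
    return compressed_bytes * 8.0 / (height * width * num_frames)

# e.g. 50 KB of compressed data for one hundred 448x256 frames
print(bits_per_pixel(50_000, 256, 448, 100))  # roughly 0.035 bpp
```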

Experiments

Experimental results are available in the evaluation directory.

Demo


Note: Compression and reconstruction without a GPU will be slower than in the demonstration above.

Visualization


The images are, in order: first frame, second frame, optical flow, reconstructed optical flow, motion-compensated frame, residue, reconstructed residue, and reconstructed frame.

Authors

Prasanga Dhungel
Prashant Tandan
Sandesh Bhusal
Sobit Neupane
