
rballester / tthresh

License: LGPL-3.0
C++ compressor for multidimensional grid data using the Tucker decomposition

Programming Languages

C++
36643 projects - #6 most used programming language
CMake
9771 projects

Projects that are alternatives of or similar to tthresh

NTFk.jl
Unsupervised Machine Learning: Nonnegative Tensor Factorization + k-means clustering
Stars: ✭ 36 (+2.86%)
Mutual labels:  tensor-decomposition, hosvd, tucker-decomposition
torchprune
A research library for pytorch-based neural network pruning, compression, and more.
Stars: ✭ 133 (+280%)
Mutual labels:  compression, tensor-decomposition
rbzip2
bzip2 for Ruby
Stars: ✭ 39 (+11.43%)
Mutual labels:  compression
paq8pxd
No description or website provided.
Stars: ✭ 55 (+57.14%)
Mutual labels:  compression
memscrimper
Code for the DIMVA 2018 paper: "MemScrimper: Time- and Space-Efficient Storage of Malware Sandbox Memory Dumps"
Stars: ✭ 25 (-28.57%)
Mutual labels:  compression
scikit tt
Tensor Train Toolbox
Stars: ✭ 52 (+48.57%)
Mutual labels:  tensor-decomposition
DNNAC
All about acceleration and compression of Deep Neural Networks
Stars: ✭ 29 (-17.14%)
Mutual labels:  compression
supersnappy
Dependency-free and performant Nim Snappy implementation.
Stars: ✭ 55 (+57.14%)
Mutual labels:  compression
zlib
Compression and decompression in the gzip and zlib formats
Stars: ✭ 32 (-8.57%)
Mutual labels:  compression
imagezero
Fast Lossless Color Image Compression Library
Stars: ✭ 49 (+40%)
Mutual labels:  compression
ikeapack
Compact data serializer/packer written in Go, intended to produce a cross-language usable format.
Stars: ✭ 18 (-48.57%)
Mutual labels:  compression
mmtf
The specification of the MMTF format for biological structures
Stars: ✭ 40 (+14.29%)
Mutual labels:  compression
AGD
[ICML2020] "AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks" by Yonggan Fu, Wuyang Chen, Haotao Wang, Haoran Li, Yingyan Lin, Zhangyang Wang
Stars: ✭ 98 (+180%)
Mutual labels:  compression
upload-compression-plugin
Compress and decompress files on https://upload.io/
Stars: ✭ 21 (-40%)
Mutual labels:  compression
charls
CharLS, a C++ JPEG-LS library implementation
Stars: ✭ 134 (+282.86%)
Mutual labels:  compression
hasmin
Hasmin - A Haskell CSS Minifier
Stars: ✭ 55 (+57.14%)
Mutual labels:  compression
MGARD
MGARD: MultiGrid Adaptive Reduction of Data
Stars: ✭ 16 (-54.29%)
Mutual labels:  compression
SSffmpegVideoOperation
This is a library of FFmpeg for android... 📸 🎞 🚑
Stars: ✭ 261 (+645.71%)
Mutual labels:  compression
pcc geo cnn v2
Improved Deep Point Cloud Geometry Compression
Stars: ✭ 55 (+57.14%)
Mutual labels:  compression
tednet
TedNet: A Pytorch Toolkit for Tensor Decomposition Networks
Stars: ✭ 45 (+28.57%)
Mutual labels:  tensor-decomposition

tthresh

TTHRESH: Tensor Compression for Multidimensional Visual Data

This is an open-source C++ implementation, written by Rafael Ballester-Ripoll ([email protected]), of the compressor developed in TTHRESH: Tensor Compression for Multidimensional Visual Data (R. Ballester-Ripoll, P. Lindstrom and R. Pajarola). It is intended for Cartesian grid data of 3 or more dimensions and leverages the higher-order singular value decomposition (HOSVD), a generalization of the SVD to 3 or more dimensions.
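As a brief sketch in standard tensor notation (generic HOSVD background, not a specification of this particular implementation): a volume \mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3} is represented in the Tucker format as

    \mathcal{A} \;\approx\; \mathcal{G} \times_1 U^{(1)} \times_2 U^{(2)} \times_3 U^{(3)},

where the U^{(n)} are factor matrices with orthonormal columns, \mathcal{G} is the core tensor of transform coefficients, and \times_n denotes the mode-n product. Most of the core's energy concentrates in a few large coefficients, which is what a compressor can exploit when quantizing and encoding them.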

If you use TTHRESH for a scientific publication, please cite one or both of these papers:

  • TTHRESH: Tensor Compression for Multidimensional Visual Data:

    @article{BLP:19,
      Author = {Ballester-Ripoll, Rafael and Lindstrom, Peter and Pajarola, Renato},
      Journal = {IEEE Transactions on Visualization and Computer Graphics},
      Keywords = {visualization, data compression, volume rendering, higher-order decompositions, tensor approximation},
      Title = {TTHRESH: Tensor Compression for Multidimensional Visual Data},
      Volume = {26},
      Issue = {9},
      Pages = {2891--2903},
      Year = {2019}}

  • Lossy Volume Compression Using Tucker Truncation and Thresholding:

    @article{BP:15,
      year = {2015},
      issn = {0178-2789},
      journal = {The Visual Computer},
      title = {Lossy volume compression using {T}ucker truncation and thresholding},
      publisher = {Springer Berlin Heidelberg},
      keywords = {Tensor approximation; Data compression; Higher-order decompositions; Tensor rank reduction; Multidimensional data encoding},
      author = {Ballester-Ripoll, Rafael and Pajarola, Renato},
      pages = {1--14}}

For more information on the Tucker transform and tensor-based volume compression, check out our slides.

Visual Example

"Isotropic fine" turbulence timestep (512x512x512, 32-bit float) from the Johns Hopkins Turbulence Database:

Download

git clone https://github.com/rballester/tthresh.git

(or as a zip file).

Compilation

Use CMake to generate an executable tthresh:

mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make

Usage

Compression:

tthresh -i <dataset> <options> -c <compressed dataset>

Decompression:

tthresh -c <compressed dataset> -o <decompressed dataset>

Compression + decompression (this will print both the compression rate and the achieved accuracy):

tthresh -i dataset <options> -c <compressed dataset> -o <decompressed dataset>

The target accuracy can be specified either as relative error (-e), RMSE (-r) or PSNR (-p).
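For reference, these are the usual definitions of the three metrics for a data set \mathcal{A} with N values and reconstruction \tilde{\mathcal{A}} (the exact peak convention for PSNR varies between tools; run tthresh -h to see the definitions the compressor reports):

    \varepsilon = \frac{\lVert \mathcal{A} - \tilde{\mathcal{A}} \rVert_F}{\lVert \mathcal{A} \rVert_F},
    \qquad
    \mathrm{RMSE} = \frac{\lVert \mathcal{A} - \tilde{\mathcal{A}} \rVert_F}{\sqrt{N}},
    \qquad
    \mathrm{PSNR} = 20 \log_{10}\!\left(\frac{s}{\mathrm{RMSE}}\right),

where s is the signal peak (commonly the data range max − min, or half of it, depending on the convention).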

Toy example:

A toy data set (a 3D sphere) is included in the data/ folder. You can test the compressor with it as follows:

tthresh -i data/3D_sphere_64_uchar.raw -t uchar -s 64 64 64 -p 30 -c data/comp.raw -o data/decomp.raw
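If you want to double-check the reported accuracy independently, the following minimal sketch (not part of tthresh) recomputes RMSE and PSNR between the toy volume and its reconstruction. It assumes the decompressed file data/decomp.raw is written back as 64x64x64 unsigned chars, matching the input; adjust the type, sizes, and the PSNR peak convention if yours differ.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <fstream>
#include <iostream>
#include <vector>

int main() {
    const std::size_t n = 64ULL * 64 * 64;  // number of voxels in the toy sphere volume

    // Read both raw volumes as flat byte buffers
    std::ifstream fa("data/3D_sphere_64_uchar.raw", std::ios::binary);
    std::ifstream fb("data/decomp.raw", std::ios::binary);
    std::vector<std::uint8_t> a(n), b(n);
    if (!fa.read(reinterpret_cast<char*>(a.data()), n) ||
        !fb.read(reinterpret_cast<char*>(b.data()), n)) {
        std::cerr << "could not read input volumes\n";
        return 1;
    }

    // Accumulate squared error and track the original data range
    double sse = 0, lo = 255, hi = 0;
    for (std::size_t i = 0; i < n; ++i) {
        const double d = double(a[i]) - double(b[i]);
        sse += d * d;
        lo = std::min(lo, double(a[i]));
        hi = std::max(hi, double(a[i]));
    }
    const double rmse = std::sqrt(sse / n);
    // PSNR with the data range as peak; tthresh's own output may use a different convention
    const double psnr = 20.0 * std::log10((hi - lo) / rmse);
    std::cout << "RMSE = " << rmse << ", PSNR = " << psnr << " dB\n";
    return 0;
}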

Extra Features

  • Use -a to reconstruct only the data set's bounding box.
  • Use -k when compressing a file to skip its k leading bytes.
  • Use NumPy-like notation immediately after -o to decimate the data during decompression. For example, -o :: :: 0 will reconstruct only the first z-slice of a volume, -o ::2 ::2 ::2 will decompress only every other voxel along all dimensions, and -o ll4 ll4 ll4 will perform Lanczos downsampling by a factor of 4. Example results for x2 decimation are shown in the project repository.

To get more info on the available options, run tthresh -h.

Acknowledgments

This work was partially supported by the UZH Forschungskredit "Candoc", grant number FK-16-012. I also thank Enrique G. Paredes for his help with CMake compilation issues.

Why Tucker?

Tensor-based compression is non-local, in the sense that all compressed coefficients contribute to the reconstruction of each individual voxel (in contrast to, e.g., wavelet transforms or JPEG for images, which rely on localized transforms). This can be computationally demanding, but it decorrelates the data at all spatial scales, which has several advantages:

  • Very competitive compression quality
  • Fine bit-rate granularity
  • Smooth degradation at high compression (in particular, no blocking artifacts or temporal glitches)
  • Ability to downsample in the compressed domain (sketched below)
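The last point deserves a short sketch (standard Tucker algebra, not a description of this codebase's internals): since reconstruction is \mathcal{G} \times_1 U^{(1)} \times_2 U^{(2)} \times_3 U^{(3)}, decimating the output only requires dropping or filtering rows of the factor matrices before the inverse transform, while the core stays untouched. For example, keeping every other row of each factor gives

    \tilde{\mathcal{A}}_{\downarrow 2} \;=\; \mathcal{G} \times_1 U^{(1)}_{1::2,\,:} \times_2 U^{(2)}_{1::2,\,:} \times_3 U^{(3)}_{1::2,\,:},

which is the kind of shortcut that options such as -o ::2 ::2 ::2 can exploit.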