weissenberger / gpuhd

License: LGPL-3.0
Massively Parallel Huffman Decoding on GPUs

Programming Languages

C++
Cuda
Makefile

Projects that are alternatives to or similar to gpuhd

Huffman-Coding
A C++ compression program and decoder based on Huffman's lossless compression algorithm.
Stars: ✭ 81 (+170%)
Mutual labels:  huffman, decompression, huffman-coding, huffman-decoder
CARE
CHAI and RAJA provide an excellent base on which to build portable codes. CARE expands that functionality, adding new features such as loop fusion capability and a portable interface for many numerical algorithms. It provides all the basics for anyone wanting to write portable code.
Stars: ✭ 22 (-26.67%)
Mutual labels:  gpu-acceleration, gpu-computing, gpu-programming
Apriori-and-Eclat-Frequent-Itemset-Mining
Implementation in Python of the Apriori and Eclat algorithms, two of the best-known basic algorithms for mining frequent item sets in a set of transactions.
Stars: ✭ 36 (+20%)
Mutual labels:  gpu-acceleration, gpu-programming
Deepnet
Deep.Net machine learning framework for F#
Stars: ✭ 99 (+230%)
Mutual labels:  gpu-acceleration, gpu-computing
runtime
AnyDSL Runtime Library
Stars: ✭ 17 (-43.33%)
Mutual labels:  gpu-acceleration, gpu-computing
Cekirdekler
Multi-device OpenCL kernel load balancer and pipeliner API for C#. Uses a shared-distributed memory model to keep GPUs updated quickly while using the same kernel on all devices (for simplicity).
Stars: ✭ 76 (+153.33%)
Mutual labels:  gpu-acceleration, gpu-computing
Emu
The write-once-run-anywhere GPGPU library for Rust
Stars: ✭ 1,350 (+4400%)
Mutual labels:  gpu-acceleration, gpu-computing
Clojurecuda
Clojure library for CUDA development
Stars: ✭ 158 (+426.67%)
Mutual labels:  gpu-acceleration, gpu-computing
Bayadera
High-performance Bayesian Data Analysis on the GPU in Clojure
Stars: ✭ 342 (+1040%)
Mutual labels:  gpu-acceleration, gpu-computing
huffman-coding
A C++ compression and decompression program based on Huffman Coding.
Stars: ✭ 31 (+3.33%)
Mutual labels:  decompression, huffman-coding
Gpufit
GPU-accelerated Levenberg-Marquardt curve fitting in CUDA
Stars: ✭ 174 (+480%)
Mutual labels:  gpu-acceleration, gpu-computing
raisin
A simple lightweight set of implementations and bindings for compression algorithms written in Go.
Stars: ✭ 17 (-43.33%)
Mutual labels:  huffman, decompression
ArithmeticEncodingPython
Data Compression using Arithmetic Encoding in Python
Stars: ✭ 37 (+23.33%)
Mutual labels:  data-compression, entropy-coding
Heteroflow
Concurrent CPU-GPU Programming using Task Models
Stars: ✭ 57 (+90%)
Mutual labels:  gpu-acceleration, gpu-computing
Stdgpu
stdgpu: Efficient STL-like Data Structures on the GPU
Stars: ✭ 531 (+1670%)
Mutual labels:  gpu-acceleration, gpu-computing
Pysnn
Efficient Spiking Neural Network framework, built on top of PyTorch for GPU acceleration
Stars: ✭ 129 (+330%)
Mutual labels:  gpu-acceleration, gpu-computing
rbcuda
CUDA bindings for Ruby
Stars: ✭ 57 (+90%)
Mutual labels:  gpu-acceleration, gpu-computing
Vuh
Vulkan compute for people
Stars: ✭ 264 (+780%)
Mutual labels:  gpu-acceleration, gpu-computing
Montecarlomeasurements.jl
Propagation of distributions by Monte-Carlo sampling: Real number types with uncertainty represented by samples.
Stars: ✭ 168 (+460%)
Mutual labels:  gpu-acceleration, gpu-computing
decompress
Pure OCaml implementation of Zlib.
Stars: ✭ 103 (+243.33%)
Mutual labels:  huffman, decompression

CUHD - A Massively Parallel Huffman Decoder

A Huffman decoder for processing raw (i.e., unpartitioned) Huffman-encoded data on the GPU. It also includes a basic sequential encoder.
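
For context, here is a minimal sketch of how a purely sequential decoder walks a raw Huffman bitstream (the toy code table and all names are assumptions for illustration only, not the CUHD API). Because the end of each codeword is only known once it has been decoded, the data cannot be split at arbitrary positions, which is what makes parallel decoding of unpartitioned input nontrivial.

#include <iostream>
#include <map>
#include <string>
#include <vector>

// Toy prefix code (an assumption for illustration): 'a' -> 0, 'b' -> 10, 'c' -> 11
int main() {
    std::map<std::string, char> code = {{"0", 'a'}, {"10", 'b'}, {"11", 'c'}};

    // Raw, unpartitioned bitstream for "abca": 0 10 11 0
    std::vector<int> bits = {0, 1, 0, 1, 1, 0};

    std::string decoded, current;
    for (int bit : bits) {
        current += (bit ? '1' : '0');   // extend the candidate codeword bit by bit
        auto it = code.find(current);
        if (it != code.end()) {         // a complete codeword has been matched
            decoded += it->second;
            current.clear();            // the next codeword starts here -- its
                                        // position was unknown before this point
        }
    }
    std::cout << decoded << '\n';       // prints "abca"
    return 0;
}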

For further information, please refer to our conference paper.

Requirements

  • CUDA-enabled GPU with compute capability 3.0 or higher
  • GNU/Linux
  • GNU compiler version 5.4.0 or higher
  • CUDA SDK 8 or higher
  • Latest proprietary graphics drivers

Compilation process

Configuration

Please edit the Makefile:

  1. Set CUDA_INCLUDE to the include directory of your CUDA installation, e.g.: CUDA_INCLUDE = /usr/local/cuda-9.1/include

  2. Set CUDA_LIB to the library directory of your CUDA installation, e.g.: CUDA_LIB = /usr/local/cuda-9.1/lib64

  3. Set ARCH to the compute capability of your GPU, e.g. ARCH = 35 for compute capability 3.5. If you'd like to compile the decoder for multiple generations of GPUs, please edit NVCC_FLAGS accordingly. An example configuration is shown below.
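
A complete configuration might then look like the following (the paths and the compute capability are examples only; adjust them to your CUDA installation and GPU):

CUDA_INCLUDE = /usr/local/cuda-9.1/include
CUDA_LIB = /usr/local/cuda-9.1/lib64
ARCH = 61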

Test program

The test program generates a chunk of random, binomially distributed data, encodes it with a specified maximum codeword length, and decodes it on the GPU.
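
To illustrate what such input looks like, here is a small stand-alone sketch of generating binomially distributed byte data with the C++ standard library (this is not the generator used by the test program, just an illustration of the distribution):

#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

int main() {
    // Draw 1 MiB of symbols from a binomial distribution over the byte alphabet.
    // The distribution is concentrated around its mean, so a few symbol values
    // are frequent and most are rare -- the kind of skew Huffman coding exploits.
    std::mt19937 rng(42);
    std::binomial_distribution<int> dist(255, 0.5);

    std::vector<std::uint8_t> data(1 << 20);
    for (auto &symbol : data)
        symbol = static_cast<std::uint8_t>(dist(rng));

    std::cout << "generated " << data.size() << " symbols\n";
    return 0;
}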

Compiling the test program

To compile the test program, configure the Makefile as described above. Run:

make

Running the test program

./bin/demo <compute device index> <size of input in megabytes>
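
For example, to run the demo on the first CUDA device (index 0) with 256 megabytes of generated input:

./bin/demo 0 256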

Compiling a static library

To compile a static library, run:

make lib
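
Assuming the build produces an archive named libcuhd.a (this name, the header path, and the CUDA paths below are assumptions; check the Makefile for the actual output), an application could be linked against it roughly as follows:

g++ -o my_app my_app.cpp -I<path to gpuhd headers> -L<path to the archive> -lcuhd -L/usr/local/cuda-9.1/lib64 -lcudart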
