vernamlab / Cuhe

License: MIT
CUDA Homomorphic Encryption Library

Projects that are alternatives of or similar to Cuhe

Deeppipe2
Deep Learning library using GPU(CUDA/cuBLAS)
Stars: ✭ 90 (-17.43%)
Mutual labels:  cuda
Pynvvl
A Python wrapper of NVIDIA Video Loader (NVVL) with CuPy for fast video loading with Python
Stars: ✭ 95 (-12.84%)
Mutual labels:  cuda
Cuda Winograd
Fast CUDA Kernels for ResNet Inference.
Stars: ✭ 104 (-4.59%)
Mutual labels:  cuda
Matconvnet
MatConvNet: CNNs for MATLAB
Stars: ✭ 1,299 (+1091.74%)
Mutual labels:  cuda
Fbtt Embedding
This is a Tensor Train based compression library to compress sparse embedding tables used in large-scale machine learning models such as recommendation and natural language processing. We showed this library can reduce the total model size by up to 100x in Facebook's open sourced DLRM model while achieving the same model quality. Our implementation is faster than the state-of-the-art implementations. Existing state-of-the-art libraries also decompress the whole embedding tables on the fly, so they do not provide memory reduction during training. Our library decompresses only the requested rows and can therefore provide a 10,000x memory footprint reduction per embedding table. The library also includes a software cache that stores a portion of the table entries in decompressed format for faster lookup and processing.
Stars: ✭ 92 (-15.6%)
Mutual labels:  cuda
Extending Jax
Extending JAX with custom C++ and CUDA code
Stars: ✭ 98 (-10.09%)
Mutual labels:  cuda
Weighted softmax loss
Weighted Softmax Loss Layer for Caffe
Stars: ✭ 89 (-18.35%)
Mutual labels:  cuda
Hashcat
World's fastest and most advanced password recovery utility
Stars: ✭ 11,014 (+10004.59%)
Mutual labels:  cuda
Region Conv
Not All Pixels Are Equal: Difficulty-Aware Semantic Segmentation via Deep Layer Cascade
Stars: ✭ 95 (-12.84%)
Mutual labels:  cuda
Pygraphistry
PyGraphistry is a Python library to quickly load, shape, embed, and explore big graphs with the GPU-accelerated Graphistry visual graph analyzer
Stars: ✭ 1,365 (+1152.29%)
Mutual labels:  cuda
Elasticfusion
Real-time dense visual SLAM system
Stars: ✭ 1,298 (+1090.83%)
Mutual labels:  cuda
Numer
Numeric Erlang - vector and matrix operations with CUDA. Heavily inspired by Pteracuda - https://github.com/kevsmith/pteracuda
Stars: ✭ 91 (-16.51%)
Mutual labels:  cuda
Dpp
Detail-Preserving Pooling in Deep Networks (CVPR 2018)
Stars: ✭ 99 (-9.17%)
Mutual labels:  cuda
Aurora
Minimal deep learning library written in Python/Cython/C++ and Numpy/CUDA/cuDNN.
Stars: ✭ 90 (-17.43%)
Mutual labels:  cuda
Chamferdistancepytorch
Chamfer Distance in Pytorch with f-score
Stars: ✭ 105 (-3.67%)
Mutual labels:  cuda
Halloc
A fast and highly scalable GPU dynamic memory allocator
Stars: ✭ 89 (-18.35%)
Mutual labels:  cuda
Supra
SUPRA: Software Defined Ultrasound Processing for Real-Time Applications - An Open Source 2D and 3D Pipeline from Beamforming to B-Mode
Stars: ✭ 96 (-11.93%)
Mutual labels:  cuda
Torch Mesh Isect
Stars: ✭ 107 (-1.83%)
Mutual labels:  cuda
Dace
DaCe - Data Centric Parallel Programming
Stars: ✭ 106 (-2.75%)
Mutual labels:  cuda
Deepnet
Deep.Net machine learning framework for F#
Stars: ✭ 99 (-9.17%)
Mutual labels:  cuda

cuHE: Homomorphic and fast

CUDA Homomorphic Encryption Library (cuHE) is a GPU-accelerated library for homomorphic encryption (HE) schemes and homomorphic algorithms defined over polynomial rings. cuHE delivers outstanding performance while providing a simple interface that greatly enhances programmer productivity. It features both algebraic techniques for the homomorphic evaluation of circuits and highly optimized code for single-GPU and multi-GPU machines. Develop high-performance applications rapidly with cuHE!

The cuHE library is distributed under the terms of the MIT License (MIT). It is currently intended for research purposes only. Several algorithms are implemented as examples and more will follow. Feedback and collaboration of any kind are welcome.

Features

The library pushes performance to the limit. A number of optimizations, such as algebraic techniques for efficient evaluation, memory minimization techniques, memory and stream scheduling, and hand-tuned low-level CUDA assembly, are included to take full advantage of the massive parallelism and high memory bandwidth that GPUs offer. The arithmetic functions built to handle very large polynomial operands adopt methods based on the Chinese remainder theorem (CRT), the number-theoretic transform (NTT), and Barrett reduction. Several of the library's algorithms and routines are described in paper 1 below, along with a performance analysis. More details on the arithmetic methods and HE-related optimizations are explained in our previous papers listed below.

  1. Dai, Wei, and Berk Sunar. "cuHE: A Homomorphic Encryption Accelerator Library." Cryptography and Information Security in the Balkans. Springer International Publishing, 2015. 169-186. [draft] [Springer]

  2. Dai, Wei, Yarkın Doröz, and Berk Sunar. "Accelerating NTRU based homomorphic encryption using GPUs." High Performance Extreme Computing Conference (HPEC), 2014 IEEE. IEEE, 2014. [draft] [IEEE Xplore]

  3. Dai, Wei, Yarkın Doröz, and Berk Sunar. "Accelerating SWHE Based PIRs Using GPUs." Financial Cryptography and Data Security: FC 2015 International Workshops, BITCOIN, WAHC, and Wearable, San Juan, Puerto Rico, January 30, 2015, Revised Selected Papers. Vol. 8976. Springer, 2015. [draft] [Springer]
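As a rough illustration of the CRT-based arithmetic described in the Features paragraph above, the toy C++ sketch below (not cuHE code; all values are illustrative) reduces two large coefficients modulo a set of pairwise coprime primes and multiplies the residues independently; in cuHE these per-prime computations run in parallel on the GPU, and the result is recombined with the CRT.

#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    // Pairwise coprime moduli (illustrative values only).
    const std::vector<uint64_t> primes = {1000003, 1000033, 1000037};
    uint64_t a = 123456789, b = 987654321;

    // Forward CRT map of a and b, followed by a per-prime (pointwise) multiply.
    std::vector<uint64_t> prod(primes.size());
    for (size_t i = 0; i < primes.size(); ++i)
        prod[i] = (a % primes[i]) * (b % primes[i]) % primes[i];

    // CRT reconstruction (e.g. Garner's algorithm) would recover a*b modulo the
    // product of the primes; it is omitted here for brevity.
    for (size_t i = 0; i < primes.size(); ++i)
        std::cout << "residue mod " << primes[i] << " = " << prod[i] << "\n";
    return 0;
}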

Examples

Currently available is an implementation of the Doröz-Hu-Sunar (DHS) somewhat homomorphic encryption (SHE) scheme, which is based on the López-Alt-Tromer-Vaikuntanathan (LTV) scheme. Several homomorphic applications built on DHS are implemented on GPUs and included as examples, such as the Prince block cipher and a sorting algorithm. These examples give an idea of how to program with the cuHE library.

System Requirements

  1. NVIDIA CUDA-enabled GPUs with compute capability 3.0 or higher
  2. NTL: A Library for doing Number Theory, version 9.3.0 (requires C++11). NOTE: to avoid random crashes, compile it by running ./configure NTL_EXCEPTIONS=on
  3. The OpenMP API

Compile

cd cuhe
cmake ./
make

Options to the cmake command and their default values are:

-DGPU_ARCH:STRING=50
-DGCC_CUDA_VERSION:STRING=gcc-4.9
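
For example, to build for a different GPU architecture or host compiler, the defaults can be overridden on the command line (the values below are illustrative only; pick the ones matching your hardware and toolchain):

cmake -DGPU_ARCH:STRING=61 -DGCC_CUDA_VERSION:STRING=gcc-5 ./
make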

Notes for Mac OS X

On Mac OS X you must use clang instead of gcc, and you need a version that supports OpenMP. With Homebrew you can install it:

brew install clang-omp

Then you must tell CMake and CUDA that you are using clang-omp:

cd cuhe
CC=clang-omp CXX=clang-omp++ cmake -DGCC_CUDA_VERSION=clang-omp ./
make

A Short Tutorial

To design and implement a homomorphic application or circuit, e.g. the AND of 8 bits, we first need to decide which homomorphic encryption scheme to adopt and set its parameters (polynomial ring degree, coefficient sizes in each level of the circuit, relinearization strategy) according to a noise analysis. Let's say we decide to adopt the DHS HE scheme.

#include "cuHE.h"
void setParameters(int d, int p, int w, int min, int cut, int m); // in "cuHE.h", set parameters
void initCuHE(ZZ *coeffMod_, ZZX modulus); // in "cuHE.h", start pre-computation on GPUs
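
For instance, a minimal setup might look like the sketch below. The parameter values, the per-level coefficient moduli, and the ring modulus shown here are placeholders for illustration only; real applications should derive them from a noise analysis, as the bundled examples do.

#include <vector>
#include <NTL/ZZ.h>
#include <NTL/ZZX.h>
#include "cuHE.h"
using namespace NTL;

int main() {
    // Placeholder parameters -- real values must come from a noise analysis.
    int d = 9, p = 2, w = 32, min = 25, cut = 20, m = 21845;
    setParameters(d, p, w, min, cut, m);

    // Per-level coefficient moduli and the ring modulus polynomial; both are
    // produced by the chosen HE scheme's setup (e.g. DHS key generation).
    std::vector<ZZ> coeffMod(d);
    ZZX modulus;
    // ... compute coeffMod and modulus here ...
    initCuHE(coeffMod.data(), modulus);
    return 0;
}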

Then we may perform some pre-computation for the circuit. When it is time to run the circuit, we suggest turning on the virtual allocator. Do not turn it off until the circuit is completely done.

void startAllocator(); // in "cuHE.h", start virtual allocator
void stopAllocator(); // in "cuHE.h", stop virtual allocator

The program by default uses a single GPU (device ID 0). To adopt multiple devices, call the function below.

void multiGPUs(int num); //adopt 'num' GPUs
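
Putting these calls together, a hedged sketch of the overall flow might look like the following; the device count is arbitrary, the circuit body is elided, and the bundled examples are authoritative for the exact call ordering.

#include "cuHE.h"

void evaluateCircuit() {
    multiGPUs(2);      // optional: adopt 2 GPUs instead of the default device 0
    startAllocator();  // keep the virtual allocator on for the whole circuit
    // ... homomorphic evaluation of the circuit goes here ...
    stopAllocator();   // stop only once the circuit is completely done
}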

Those are all the initialization steps. To implement any HE scheme or circuit, please check out the provided examples.

Developer

The cuHE library is developed and maintained by Wei Dai from the Vernam Group at Worcester Polytechnic Institute.

Acknowledgment

Funding for this research was provided in part by the US National Science Foundation under CNS Awards #1117590 and #1319130.

We want to acknowledge Andrea Peruffo for improving and debugging the code.
