
Liusifei / Pytorch_spn

Extension package for the spatial propagation network (SPN) in PyTorch.


Projects that are alternatives to or similar to Pytorch_spn

Numer
Numeric Erlang - vector and matrix operations with CUDA. Heavily inspired by Pteracuda - https://github.com/kevsmith/pteracuda
Stars: ✭ 91 (-20.18%)
Mutual labels:  cuda
Pygraphistry
PyGraphistry is a Python library to quickly load, shape, embed, and explore big graphs with the GPU-accelerated Graphistry visual graph analyzer
Stars: ✭ 1,365 (+1097.37%)
Mutual labels:  cuda
Cuhe
CUDA Homomorphic Encryption Library
Stars: ✭ 109 (-4.39%)
Mutual labels:  cuda
Region Conv
Not All Pixels Are Equal: Difficulty-Aware Semantic Segmentation via Deep Layer Cascade
Stars: ✭ 95 (-16.67%)
Mutual labels:  cuda
Dpp
Detail-Preserving Pooling in Deep Networks (CVPR 2018)
Stars: ✭ 99 (-13.16%)
Mutual labels:  cuda
Chamferdistancepytorch
Chamfer Distance in Pytorch with f-score
Stars: ✭ 105 (-7.89%)
Mutual labels:  cuda
Elasticfusion
Real-time dense visual SLAM system
Stars: ✭ 1,298 (+1038.6%)
Mutual labels:  cuda
Pytorch Unflow
A reimplementation of UnFlow in PyTorch that matches the official TensorFlow version
Stars: ✭ 113 (-0.88%)
Mutual labels:  cuda
Deepnet
Deep.Net machine learning framework for F#
Stars: ✭ 99 (-13.16%)
Mutual labels:  cuda
Torch Mesh Isect
Stars: ✭ 107 (-6.14%)
Mutual labels:  cuda
Pynvvl
A Python wrapper of NVIDIA Video Loader (NVVL) with CuPy for fast video loading with Python
Stars: ✭ 95 (-16.67%)
Mutual labels:  cuda
Extending Jax
Extending JAX with custom C++ and CUDA code
Stars: ✭ 98 (-14.04%)
Mutual labels:  cuda
Dace
DaCe - Data Centric Parallel Programming
Stars: ✭ 106 (-7.02%)
Mutual labels:  cuda
Fbtt Embedding
This is a Tensor Train based compression library for the sparse embedding tables used in large-scale machine learning models such as recommendation and natural language processing systems. It can reduce total model size by up to 100x in Facebook's open-sourced DLRM model while preserving model quality, and the implementation is faster than existing state-of-the-art libraries. Those libraries also decompress whole embedding tables on the fly and therefore provide no memory reduction during training, whereas this library decompresses only the requested rows, yielding up to a 10,000x memory footprint reduction per embedding table. It additionally includes a software cache that keeps a portion of the table entries in decompressed form for faster lookup and processing.
Stars: ✭ 92 (-19.3%)
Mutual labels:  cuda
Futhark
💥💻💥 A data-parallel functional programming language
Stars: ✭ 1,641 (+1339.47%)
Mutual labels:  cuda
Tutorial Ubuntu 18.04 Install Nvidia Driver And Cuda And Cudnn And Build Tensorflow For Gpu
Ubuntu 18.04: how to install the NVIDIA driver + CUDA + cuDNN and build TensorFlow for GPU, step by step on the command line
Stars: ✭ 91 (-20.18%)
Mutual labels:  cuda
Cuda Winograd
Fast CUDA Kernels for ResNet Inference.
Stars: ✭ 104 (-8.77%)
Mutual labels:  cuda
Tensorflow Object Detection Tutorial
The purpose of this tutorial is to learn how to install and prepare the TensorFlow framework to train your own convolutional neural network object detection classifier for multiple objects, starting from scratch
Stars: ✭ 113 (-0.88%)
Mutual labels:  cuda
Adacof Pytorch
Official source code for our paper "AdaCoF: Adaptive Collaboration of Flows for Video Frame Interpolation" (CVPR 2020)
Stars: ✭ 110 (-3.51%)
Mutual labels:  cuda
Hashcat
World's fastest and most advanced password recovery utility
Stars: ✭ 11,014 (+9561.4%)
Mutual labels:  cuda

pytorch_spn

To build, install PyTorch and run:

$ sh make.sh
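
Since the package is tagged as a CUDA project, PyTorch needs to be installed with working GPU support before building. A minimal sanity check one might run first (illustrative only, not part of the repository):

import torch  # verify that PyTorch can see a CUDA device before running make.sh
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))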

See left_right_demo.py for usage:

$ mv left_right_demo.py ../

$ python left_right_demo.py
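
For orientation, the sketch below shows what a single left-to-right propagation pass might look like. The class name GateRecurrent2dnoind, its import path, its constructor flags, and the three-gate call signature are assumptions inferred from the demo's name, not the confirmed API; left_right_demo.py is the authoritative reference.

import torch
# Hypothetical usage sketch -- module name, import path, and call signature are
# assumptions; see left_right_demo.py for the actual interface.
from pytorch_spn.modules.gaterecurrent2dnoind import GateRecurrent2dnoind  # assumed path

left_right = GateRecurrent2dnoind(True, False)  # assumed flags: horizontal scan, not reversed

x  = torch.randn(1, 32, 64, 64).cuda()          # feature map to be propagated
g1 = torch.sigmoid(torch.randn_like(x))         # three affinity/gate maps, one per
g2 = torch.sigmoid(torch.randn_like(x))         # connected neighbor in the previous column
g3 = torch.sigmoid(torch.randn_like(x))

s = g1 + g2 + g3                                # normalize so the gates sum to one per pixel,
g1, g2, g3 = g1 / s, g2 / s, g3 / s             # the usual stability condition for spatial propagation

out = left_right(x, g1, g2, g3)                 # propagated features, same shape as the input
print(out.shape)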

The original Caffe code and models will be released HERE.
