OverlapPredator: [CVPR 2021, Oral] PREDATOR: Registration of 3D Point Clouds with Low Overlap.
Stars: ✭ 106 (-63.82%)
Transformer-in-Transformer: An implementation of Transformer in Transformer in TensorFlow for image classification, with attention inside local patches.
Stars: ✭ 40 (-86.35%)
EEG-DL: A deep learning library for EEG (signal) classification tasks, based on TensorFlow.
Stars: ✭ 165 (-43.69%)
mix3d: Mix3D: Out-of-Context Data Augmentation for 3D Scenes (3DV 2021 Oral).
Stars: ✭ 183 (-37.54%)
FragmentVC: Any-to-any voice conversion by end-to-end extracting and fusing fine-grained voice fragments with attention.
Stars: ✭ 134 (-54.27%)
NMT-Keras: Neural machine translation with Keras.
Stars: ✭ 501 (+70.99%)
awesome-bert-nlp: A curated list of NLP resources focused on BERT, attention mechanisms, Transformer networks, and transfer learning.
Stars: ✭ 567 (+93.52%)
UnsupervisedRR: [CVPR 2021, Oral] UnsupervisedR&R: Unsupervised Point Cloud Registration via Differentiable Rendering.
Stars: ✭ 43 (-85.32%)
Transformers-RL: An easy PyTorch implementation of "Stabilizing Transformers for Reinforcement Learning".
Stars: ✭ 107 (-63.48%)
DeepMapping: Code/webpage for the DeepMapping project.
Stars: ✭ 140 (-52.22%)
Cupoch: Robotics with GPU computing.
Stars: ✭ 225 (-23.21%)
enformer-pytorch: Implementation of Enformer, DeepMind's attention network for predicting gene expression, in PyTorch.
Stars: ✭ 146 (-50.17%)
En-transformer: Implementation of the E(n)-Transformer, which extends the ideas of Welling's E(n)-Equivariant Graph Neural Network to attention.
Stars: ✭ 131 (-55.29%)
neural_sp: End-to-end ASR/LM implementation with PyTorch.
Stars: ✭ 408 (+39.25%)
pytorch-original-transformer: An implementation of the original Transformer model (Vaswani et al.), including a playground.py file for visualizing otherwise hard-to-grasp concepts; IWSLT pretrained models are currently included.
Stars: ✭ 411 (+40.27%)
EQTransformer: A Python package for earthquake signal detection and phase picking using AI.
Stars: ✭ 95 (-67.58%)
pynmt: A simple and complete PyTorch implementation of a neural machine translation system.
Stars: ✭ 13 (-95.56%)
probreg: Python package for point cloud registration using probabilistic models (Coherent Point Drift, GMMReg, SVR, GMMTree, FilterReg, Bayesian CPD).
Stars: ✭ 306 (+4.44%)
fast_gicp: A collection of GICP-based fast point cloud registration algorithms.
Stars: ✭ 307 (+4.78%)
3D-PointCloud: Papers and datasets about point clouds.
Stars: ✭ 179 (-38.91%)
PPF-FoldNet: PyTorch reimplementation of "PPF-FoldNet: Unsupervised Learning of Rotation Invariant 3D Local Descriptors" (https://arxiv.org/abs/1808.10322).
Stars: ✭ 51 (-82.59%)
YOHO: [ACM MM 2022] You Only Hypothesize Once: Point Cloud Registration with Rotation-Equivariant Descriptors.
Stars: ✭ 76 (-74.06%)
h-transformer-1d: Implementation of H-Transformer-1D, hierarchical attention for sequence learning.
Stars: ✭ 121 (-58.7%)
NLP-paper: 🎨 Tutorials on natural language processing: https://dataxujing.github.io/NLP-paper/
Stars: ✭ 23 (-92.15%)
M3DETR: Code base for M3DeTR: Multi-representation, Multi-scale, Mutual-relation 3D Object Detection with Transformers.
Stars: ✭ 47 (-83.96%)
dodrio: Exploring attention weights in transformer-based models with linguistic knowledge.
Stars: ✭ 233 (-20.48%)
self-attention-cv: Implementations of various self-attention mechanisms focused on computer vision; ongoing repository.
Stars: ✭ 209 (-28.67%)
cilantro: A lean C++ library for working with point cloud data.
Stars: ✭ 577 (+96.93%)
superpose3d: Register 3D point clouds using rotation, translation, and scale transformations.
Stars: ✭ 34 (-88.4%)
CrabNet: Predict materials properties using only composition information!
Stars: ✭ 57 (-80.55%)
Image-Caption: Using an LSTM or Transformer to solve image captioning in PyTorch.
Stars: ✭ 36 (-87.71%)
pycpd: Pure NumPy implementation of the Coherent Point Drift algorithm.
Stars: ✭ 255 (-12.97%)
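The superpose3d entry above tackles the classic closed-form similarity registration problem: recovering the rotation, translation, and scale that map one set of corresponding 3D points onto another. As an illustration only (not superpose3d's actual code), a minimal NumPy sketch of the standard Umeyama-style estimator looks like:

```python
import numpy as np

def umeyama_align(src, dst):
    """Estimate a similarity transform (scale s, rotation R, translation t)
    such that dst ≈ s * src @ R.T + t, for corresponding (n, 3) point sets.
    Illustrative sketch of the Umeyama estimator, not superpose3d's code."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d           # centred point sets
    cov = dc.T @ sc / len(src)                # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[-1, -1] = -1.0                      # guard against reflections
    R = U @ S @ Vt
    var_src = (sc ** 2).sum() / len(src)      # source variance
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Given exact correspondences the estimate is exact up to floating-point error; with noisy correspondences it is the least-squares optimum.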
transformer: A TensorFlow implementation of the Transformer from "Attention Is All You Need".
Stars: ✭ 3,646 (+1144.37%)
galerkin-transformer: [NeurIPS 2021] Galerkin Transformer: linear attention without softmax.
Stars: ✭ 111 (-62.12%)
Transformer-TTS: A PyTorch implementation of "Neural Speech Synthesis with Transformer Network".
Stars: ✭ 418 (+42.66%)
linformer: Implementation of Linformer for PyTorch.
Stars: ✭ 119 (-59.39%)
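The many Transformer implementations in this list all build on the same core primitive, scaled dot-product attention: softmax(QKᵀ/√d_k)V. A minimal single-head NumPy sketch (illustrative only, not taken from any of the listed repositories):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention for 2-D inputs:
    Q is (n_q, d_k), K is (n_k, d_k), V is (n_k, d_v).
    Returns the (n_q, d_v) output and the (n_q, n_k) attention weights."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ V, weights
```

Each output row is a convex combination of the rows of V, weighted by query-key similarity.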
se3-transformer-pytorch: Implementation of SE(3)-Transformers for equivariant self-attention in PyTorch; this particular repository is geared towards integration with an eventual AlphaFold2 replication.
Stars: ✭ 73 (-75.09%)
sockeye: A sequence-to-sequence framework focused on neural machine translation, based on Apache MXNet.
Stars: ✭ 990 (+237.88%)
ndt_omp: Multi-threaded and SSE-friendly NDT algorithm.
Stars: ✭ 291 (-0.68%)
linear-attention-transformer: Transformer based on a variant of attention whose complexity is linear with respect to sequence length.
Stars: ✭ 205 (-30.03%)
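Linear-attention variants such as linear-attention-transformer and galerkin-transformer exploit a simple algebraic fact: once the softmax is removed (or replaced by a kernel feature map), matrix associativity lets (QKᵀ)V be computed as Q(KᵀV), replacing the n×n similarity matrix with a d×d one and making the cost linear in sequence length. A hedged NumPy sketch of just that reordering (real linear attention additionally applies a non-negative feature map and a normalizer):

```python
import numpy as np

def quadratic_attn(Q, K, V):
    # O(n^2) ordering: materializes the full n x n similarity matrix.
    return (Q @ K.T) @ V

def linear_attn(Q, K, V):
    # O(n) ordering: contracts K and V first into a d x d matrix.
    # Identical result by associativity when no softmax intervenes.
    return Q @ (K.T @ V)
```

For sequence length n and head dimension d, the first form costs O(n²d) while the second costs O(nd²), a large saving whenever n ≫ d.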
DeepI2P: Image-to-Point Cloud Registration via Deep Classification (CVPR 2021).
Stars: ✭ 130 (-55.63%)
visualization: A collection of visualization functions.
Stars: ✭ 189 (-35.49%)
abcnn-pytorch: Implementation of ABCNN (Attention-Based Convolutional Neural Network) in PyTorch.
Stars: ✭ 35 (-88.05%)
Transformer-ocr: Handwritten text recognition using transformers.
Stars: ✭ 92 (-68.6%)
Universal_Head_3DMM: Project page for "Towards a Complete 3D Morphable Model of the Human Head".
Stars: ✭ 138 (-52.9%)
SpinNet: [CVPR 2021] SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration.
Stars: ✭ 181 (-38.23%)
image-classification: A collection of SOTA image classification models in PyTorch.
Stars: ✭ 70 (-76.11%)
frustum-convnet: The PyTorch implementation of F-ConvNet for 3D object detection.
Stars: ✭ 228 (-22.18%)
towhee: Towhee is a framework dedicated to making neural data processing pipelines simple and fast.
Stars: ✭ 821 (+180.2%)
mrivis: Medical image visualization library and development toolkit.
Stars: ✭ 19 (-93.52%)
CMRNet: Code for "CMRNet: Camera to LiDAR-Map Registration" (ITSC 2019); work in progress.
Stars: ✭ 70 (-76.11%)
graphtrans: Representing Long-Range Context for Graph Neural Networks with Global Attention.
Stars: ✭ 45 (-84.64%)