
412 open source projects that are alternatives to or similar to Restormer

MusicTransformer-Pytorch
MusicTransformer for music generation, written for MaestroV2 using the PyTorch framework
Stars: ✭ 106 (-81.91%)
Mutual labels:  transformer
LightFieldReconstruction
High-Dimensional Dense Residual Convolutional Neural Network for Light Field Reconstruction
Stars: ✭ 50 (-91.47%)
Mutual labels:  image-restoration
wxml-transformer
Converts WeChat Mini Program WXML code into a JS object or an HTML fragment
Stars: ✭ 18 (-96.93%)
Mutual labels:  transformer
laravel-scene
Laravel Transformer
Stars: ✭ 27 (-95.39%)
Mutual labels:  transformer
Conformer
Official code for Conformer: Local Features Coupling Global Representations for Visual Recognition
Stars: ✭ 345 (-41.13%)
Mutual labels:  transformer
text2keywords
Trained T5 and T5-large models for generating keywords from text
Stars: ✭ 53 (-90.96%)
Mutual labels:  transformer
Color-Image-Inpainting
Image inpainting based on the OMP and K-SVD algorithms
Stars: ✭ 66 (-88.74%)
Mutual labels:  image-restoration
tf2-transformer-chatbot
Transformer Chatbot in TensorFlow 2 with TPU support.
Stars: ✭ 94 (-83.96%)
Mutual labels:  transformer
multi-task-defocus-deblurring-dual-pixel-nimat
Reference GitHub repository for the paper "Improving Single-Image Defocus Deblurring: How Dual-Pixel Images Help Through Multi-Task Learning". We propose a single-image deblurring network that incorporates the two sub-aperture views into a multi-task framework. Specifically, we show that jointly learning to predict the two DP views from a single …
Stars: ✭ 29 (-95.05%)
Mutual labels:  defocus-deblurring
LaTeX-OCR
pix2tex: Using a ViT to convert images of equations into LaTeX code.
Stars: ✭ 1,566 (+167.24%)
Mutual labels:  transformer
Context-Transformer
Context-Transformer: Tackling Object Confusion for Few-Shot Detection, AAAI 2020
Stars: ✭ 89 (-84.81%)
Mutual labels:  transformer
SieNet-Image-extrapolation
SiENet: Siamese Expansion Network for Image Extrapolation (IEEE SPL 2020)
Stars: ✭ 42 (-92.83%)
Mutual labels:  image-restoration
UWCNN
Code and Datasets for "Underwater Scene Prior Inspired Deep Underwater Image and Video Enhancement", Pattern Recognition, 2019
Stars: ✭ 82 (-86.01%)
Mutual labels:  image-restoration
golgotha
Contextualised embeddings and language modelling with BERT and friends, using R
Stars: ✭ 39 (-93.34%)
Mutual labels:  transformer
paccmann proteomics
PaccMann models for protein language modeling
Stars: ✭ 28 (-95.22%)
Mutual labels:  transformer
towhee
Towhee is a framework dedicated to making neural data processing pipelines simple and fast.
Stars: ✭ 821 (+40.1%)
Mutual labels:  transformer
ICON
(TPAMI 2022) Salient Object Detection via Integrity Learning.
Stars: ✭ 125 (-78.67%)
Mutual labels:  transformer
Vision-Language-Transformer
Vision-Language Transformer and Query Generation for Referring Segmentation (ICCV 2021)
Stars: ✭ 127 (-78.33%)
Mutual labels:  transformer
SegFormer
Official PyTorch implementation of SegFormer
Stars: ✭ 1,264 (+115.7%)
Mutual labels:  transformer
transform-graphql
⚙️ Transformer function to transform GraphQL directives, e.g. to create a model CRUD directive
Stars: ✭ 23 (-96.08%)
Mutual labels:  transformer
semantic-segmentation
SOTA Semantic Segmentation Models in PyTorch
Stars: ✭ 464 (-20.82%)
Mutual labels:  transformer
CSV2RDF
Streaming, transforming, SPARQL-based CSV to RDF converter. Apache license.
Stars: ✭ 48 (-91.81%)
Mutual labels:  transformer
TDRG
Transformer-based Dual Relation Graph for Multi-label Image Recognition. ICCV 2021
Stars: ✭ 32 (-94.54%)
Mutual labels:  transformer
YOLOv5-Lite
🍅🍅🍅YOLOv5-Lite: lighter, faster, and easier to deploy. Evolved from yolov5; the model size is only 930+ KB (int8) and 1.7 MB (fp16). It can reach 10+ FPS on a Raspberry Pi 4B with an input size of 320×320.
Stars: ✭ 1,230 (+109.9%)
Mutual labels:  transformer
learningspoons
NLP lecture notes and source code
Stars: ✭ 29 (-95.05%)
Mutual labels:  transformer
R2Net
PyTorch code for "Attention Based Real Image Restoration", IEEE Transactions on Neural Networks and Learning Systems, 2021
Stars: ✭ 19 (-96.76%)
Mutual labels:  image-deraining
THREE.Highres
High resolution and depth rendering to PNG for Three.js
Stars: ✭ 28 (-95.22%)
Mutual labels:  high-resolution
SReT
Official PyTorch implementation of our ECCV 2022 paper "Sliced Recursive Transformer"
Stars: ✭ 51 (-91.3%)
Mutual labels:  efficient-transformers
Walk-Transformer
From Random Walks to Transformer for Learning Node Embeddings (ECML-PKDD 2020) (in PyTorch and TensorFlow)
Stars: ✭ 26 (-95.56%)
Mutual labels:  transformer
tensorflow-ml-nlp-tf2
Practice materials for "Natural Language Processing Starting with TensorFlow 2 and Machine Learning (from Logistic Regression to BERT and GPT-3)"
Stars: ✭ 245 (-58.19%)
Mutual labels:  transformer
En-transformer
Implementation of E(n)-Transformer, which extends the ideas of Welling's E(n)-Equivariant Graph Neural Network to attention
Stars: ✭ 131 (-77.65%)
Mutual labels:  transformer
sparse-deconv-py
Official Python implementation of 'Sparse deconvolution' (v0.3.0)
Stars: ✭ 18 (-96.93%)
Mutual labels:  image-restoration
Super-Resolution-Meta-Attention-Networks
Open source single-image super-resolution toolbox with functionality for training a diverse range of state-of-the-art super-resolution models. Also serves as the companion code for the IEEE Signal Processing Letters paper 'Improving Super-Resolution Performance using Meta-Attention Layers'.
Stars: ✭ 17 (-97.1%)
Mutual labels:  image-restoration
DolboNet
Russian-language chatbot for Discord built on the Transformer architecture
Stars: ✭ 53 (-90.96%)
Mutual labels:  transformer
Highway-Transformer
[ACL 2020] Highway Transformer: A Gated Transformer.
Stars: ✭ 26 (-95.56%)
Mutual labels:  transformer
text-style-transfer-benchmark
Text style transfer benchmark
Stars: ✭ 56 (-90.44%)
Mutual labels:  transformer
DudeNet
Designing and Training of a Dual CNN for Image Denoising (Knowledge-Based Systems, 2021)
Stars: ✭ 45 (-92.32%)
Mutual labels:  low-level-vision
graph-transformer-pytorch
Implementation of Graph Transformer in PyTorch, for potential use in replicating AlphaFold2
Stars: ✭ 81 (-86.18%)
Mutual labels:  transformer
Zero-Shot-TTS
Unofficial Implementation of Zero-Shot Text-to-Speech for Text-Based Insertion in Audio Narration
Stars: ✭ 33 (-94.37%)
Mutual labels:  transformer
DMENet
[CVPR 2019] Official TensorFlow Implementation for "Deep Defocus Map Estimation using Domain Adaptation"
Stars: ✭ 86 (-85.32%)
Mutual labels:  defocus-deblurring
hyperstyle
Official Implementation for "HyperStyle: StyleGAN Inversion with HyperNetworks for Real Image Editing" (CVPR 2022) https://arxiv.org/abs/2111.15666
Stars: ✭ 874 (+49.15%)
Mutual labels:  cvpr2022
cometa
Corpus of Online Medical EnTities: the cometA corpus
Stars: ✭ 31 (-94.71%)
Mutual labels:  transformer
DeepPhonemizer
Grapheme to phoneme conversion with deep learning.
Stars: ✭ 152 (-74.06%)
Mutual labels:  transformer
verseagility
Ramp up your custom natural language processing (NLP) task, allowing you to bring your own data, use your preferred frameworks and bring models into production.
Stars: ✭ 23 (-96.08%)
Mutual labels:  transformer
svelte-jest
Jest Svelte component transformer
Stars: ✭ 37 (-93.69%)
Mutual labels:  transformer
TS-CAM
Code for TS-CAM: Token Semantic Coupled Attention Map for Weakly Supervised Object Localization.
Stars: ✭ 96 (-83.62%)
Mutual labels:  transformer
ClusterTransformer
Topic clustering library built on Transformer embeddings and cosine similarity metrics. Compatible with all BERT-based transformers from Hugging Face.
Stars: ✭ 36 (-93.86%)
Mutual labels:  transformer
pytorch-gpt-x
Implementation of an autoregressive language model using an improved Transformer and DeepSpeed pipeline parallelism.
Stars: ✭ 21 (-96.42%)
Mutual labels:  transformer
rx-scheduler-transformer
RxJava scheduler transformer tools for Android
Stars: ✭ 15 (-97.44%)
Mutual labels:  transformer
Relation-Extraction-Transformer
NLP: Relation extraction with position-aware self-attention transformer
Stars: ✭ 63 (-89.25%)
Mutual labels:  transformer
TitleStylist
Source code for our "TitleStylist" paper at ACL 2020
Stars: ✭ 72 (-87.71%)
Mutual labels:  transformer
YOLOS
You Only Look at One Sequence (NeurIPS 2021)
Stars: ✭ 612 (+4.44%)
Mutual labels:  transformer
Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.
Stars: ✭ 484 (-17.41%)
Mutual labels:  transformer
Trident-Dehazing-Network
1st place solution to the NTIRE 2020 NonHomogeneous Dehazing Challenge (CVPR Workshop 2020).
Stars: ✭ 42 (-92.83%)
Mutual labels:  low-level-vision
alpr utils
ALPR model for Chinese license plates in unconstrained scenarios
Stars: ✭ 158 (-73.04%)
Mutual labels:  transformer
FNet-pytorch
Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms
Stars: ✭ 204 (-65.19%)
Mutual labels:  transformer
zero
Zero -- A neural machine translation system
Stars: ✭ 121 (-79.35%)
Mutual labels:  transformer
ECNDNet
Enhanced CNN for image denoising (CAAI Transactions on Intelligence Technology, 2019)
Stars: ✭ 58 (-90.1%)
Mutual labels:  low-level-vision
transformer-tensorflow2.0
Transformer implemented in TensorFlow 2.0
Stars: ✭ 53 (-90.96%)
Mutual labels:  transformer
61-120 of 412 similar projects