
807 Open source projects that are alternatives of or similar to Alphafold2

Point Transformer Pytorch
Implementation of the Point Transformer layer, in Pytorch
Stars: ✭ 199 (-33.22%)
Linear Attention Transformer
Transformer based on a variant of attention whose complexity is linear with respect to sequence length
Stars: ✭ 205 (-31.21%)
Bottleneck Transformer Pytorch
Implementation of Bottleneck Transformer in Pytorch
Stars: ✭ 408 (+36.91%)
Global Self Attention Network
A Pytorch implementation of Global Self-Attention Network, a fully-attention backbone for vision tasks
Stars: ✭ 64 (-78.52%)
Reformer Pytorch
Reformer, the efficient Transformer, in Pytorch
Stars: ✭ 1,644 (+451.68%)
Isab Pytorch
An implementation of (Induced) Set Attention Block, from the Set Transformers paper
Stars: ✭ 21 (-92.95%)
Vit Pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
Stars: ✭ 7,199 (+2315.77%)
Simplednn
SimpleDNN is a lightweight open-source machine learning library written in Kotlin, designed to support the neural network architectures relevant to natural language processing tasks
Stars: ✭ 81 (-72.82%)
Perceiver Pytorch
Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch
Stars: ✭ 130 (-56.38%)
Se3 Transformer Pytorch
Implementation of SE3-Transformers for Equivariant Self-Attention, in Pytorch. This specific repository is geared towards integration with eventual Alphafold2 replication.
Stars: ✭ 73 (-75.5%)
Routing Transformer
Fully featured implementation of Routing Transformer
Stars: ✭ 149 (-50%)
Dalle Pytorch
Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
Stars: ✭ 3,661 (+1128.52%)
Sinkhorn Transformer
Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention
Stars: ✭ 156 (-47.65%)
Slot Attention
Implementation of Slot Attention from GoogleAI
Stars: ✭ 168 (-43.62%)
Self Attention Cv
Implementation of various self-attention mechanisms focused on computer vision. Ongoing repository.
Stars: ✭ 209 (-29.87%)
Timesformer Pytorch
Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification
Stars: ✭ 225 (-24.5%)
Performer Pytorch
An implementation of Performer, a linear attention-based transformer, in Pytorch
Stars: ✭ 546 (+83.22%)
Lambda Networks
Implementation of LambdaNetworks, a new approach to image recognition that reaches SOTA with less compute
Stars: ✭ 1,497 (+402.35%)
X Transformers
A simple but complete full-attention transformer with a set of promising experimental features from various papers
Stars: ✭ 211 (-29.19%)
Linformer Pytorch
My take on a practical implementation of Linformer for Pytorch.
Stars: ✭ 239 (-19.8%)
L2c
Learning to Cluster. A deep clustering strategy.
Stars: ✭ 262 (-12.08%)
Mutual labels:  artificial-intelligence
Deeplearningnotes
Handwritten derivation notes for the Deep Learning textbook (the "flower book")
Stars: ✭ 257 (-13.76%)
Mutual labels:  artificial-intelligence
Dalle Mtf
OpenAI's DALL-E for large-scale training in mesh-tensorflow.
Stars: ✭ 250 (-16.11%)
Mutual labels:  artificial-intelligence
Atlas
An Open Source, Self-Hosted Platform For Applied Deep Learning Development
Stars: ✭ 259 (-13.09%)
Mutual labels:  artificial-intelligence
Dreamerv2
Mastering Atari with Discrete World Models
Stars: ✭ 287 (-3.69%)
Mutual labels:  artificial-intelligence
Apc Vision Toolbox
MIT-Princeton Vision Toolbox for the Amazon Picking Challenge 2016 - RGB-D ConvNet-based object segmentation and 6D object pose estimation.
Stars: ✭ 277 (-7.05%)
Mutual labels:  artificial-intelligence
Es Dev Stack
An on-premises, bare-metal solution for deploying GPU-powered applications in containers
Stars: ✭ 257 (-13.76%)
Mutual labels:  artificial-intelligence
Shufflenet
ShuffleNet in PyTorch. Based on https://arxiv.org/abs/1707.01083
Stars: ✭ 262 (-12.08%)
Mutual labels:  artificial-intelligence
Multi Scale Attention
Code for our paper "Multi-scale Guided Attention for Medical Image Segmentation"
Stars: ✭ 281 (-5.7%)
Mutual labels:  attention-mechanism
Datascience course
A Data Science course in Portuguese
Stars: ✭ 294 (-1.34%)
Mutual labels:  artificial-intelligence
Polyaxon
Machine Learning Platform for Kubernetes (MLOps tools for experimentation and automation)
Stars: ✭ 2,966 (+895.3%)
Mutual labels:  artificial-intelligence
Machinelearning
Machine learning resources
Stars: ✭ 3,042 (+920.81%)
Mutual labels:  artificial-intelligence
Machine Learning And Ai In Trading
Applying machine learning and AI algorithms to trading for better performance and lower standard deviation.
Stars: ✭ 258 (-13.42%)
Mutual labels:  artificial-intelligence
Gophersat
gophersat, a SAT solver in Go
Stars: ✭ 300 (+0.67%)
Mutual labels:  artificial-intelligence
Transformer
A Pytorch Implementation of "Attention is All You Need" and "Weighted Transformer Network for Machine Translation"
Stars: ✭ 271 (-9.06%)
Mutual labels:  attention-mechanism
Iamdinosaur
🦄 An artificial intelligence that teaches Google's Dinosaur to jump over cacti
Stars: ✭ 2,767 (+828.52%)
Mutual labels:  artificial-intelligence
Ai Job Notes
A job-hunting guide for AI algorithm positions (covering preparation strategies, coding-problem guides, referrals, a list of AI companies, and more)
Stars: ✭ 3,191 (+970.81%)
Mutual labels:  artificial-intelligence
Da Rnn
📃 **Unofficial** PyTorch Implementation of DA-RNN (arXiv:1704.02971)
Stars: ✭ 256 (-14.09%)
Mutual labels:  attention-mechanism
Pyswip
PySwip is a Python/SWI-Prolog bridge that lets you query SWI-Prolog from your Python programs. It features an (incomplete) SWI-Prolog foreign language interface, a utility class that makes querying Prolog easy, and a Pythonic interface.
Stars: ✭ 276 (-7.38%)
Mutual labels:  artificial-intelligence
Amazing Python Scripts
🚀 A curated collection of amazing Python scripts, from basic to advanced, including automation task scripts.
Stars: ✭ 229 (-23.15%)
Mutual labels:  artificial-intelligence
Fakenewscorpus
A dataset of millions of news articles scraped from a curated list of data sources.
Stars: ✭ 255 (-14.43%)
Mutual labels:  artificial-intelligence
Olivia
💁‍♀️Your new best friend powered by an artificial neural network
Stars: ✭ 3,114 (+944.97%)
Mutual labels:  artificial-intelligence
Articutapi
API of Articut, a Chinese word segmentation and semantic part-of-speech tagging tool. Segmentation is the foundation of Chinese text processing; Articut uses no machine learning and no data models, relying only on modern written-Chinese grammar rules, yet achieves over 94% F1-measure and over 96% recall on SIGHAN 2005.
Stars: ✭ 252 (-15.44%)
Mutual labels:  artificial-intelligence
Mirnet
Official repository for "Learning Enriched Features for Real Image Restoration and Enhancement" (ECCV 2020). SOTA results for image denoising, super-resolution, and image enhancement.
Stars: ✭ 247 (-17.11%)
Mutual labels:  attention-mechanism
Attention is all you need
A Chainer implementation of the Transformer from "Attention Is All You Need" (Vaswani et al. 2017).
Stars: ✭ 303 (+1.68%)
Mutual labels:  attention-mechanism
Graphbrain
Language, Knowledge, Cognition
Stars: ✭ 294 (-1.34%)
Mutual labels:  artificial-intelligence
Building A Simple Chatbot In Python Using Nltk
Building a Simple Chatbot from Scratch in Python (using NLTK)
Stars: ✭ 286 (-4.03%)
Mutual labels:  artificial-intelligence
Strips
AI Automated Planning with STRIPS and PDDL in Node.js
Stars: ✭ 272 (-8.72%)
Mutual labels:  artificial-intelligence
Writing-editing-Network
Code for the paper "Paper Abstract Writing through Editing Mechanism"
Stars: ✭ 72 (-75.84%)
Mutual labels:  attention-mechanism
galerkin-transformer
[NeurIPS 2021] Galerkin Transformer: a linear attention without softmax
Stars: ✭ 111 (-62.75%)
Mutual labels:  attention-mechanism
Cryptocurrency Price Prediction
Cryptocurrency Price Prediction Using LSTM neural network
Stars: ✭ 271 (-9.06%)
Mutual labels:  artificial-intelligence
linformer
Implementation of Linformer for Pytorch
Stars: ✭ 119 (-60.07%)
Mutual labels:  attention-mechanism
transganformer
Implementation of TransGanFormer, an all-attention GAN that combines the findings from the recent GansFormer and TransGAN papers
Stars: ✭ 137 (-54.03%)
Mutual labels:  attention-mechanism
Awesome Blockchain Ai
A curated list of Blockchain projects for Artificial Intelligence and Machine Learning
Stars: ✭ 283 (-5.03%)
Mutual labels:  artificial-intelligence
Caffe Hrt
Heterogeneous Run Time version of Caffe. Adds heterogeneous computing capabilities to Caffe, using a heterogeneous computing infrastructure framework to speed up deep learning on Arm-based heterogeneous embedded platforms, while retaining all the features of the original Caffe architecture so that users can deploy their applications seamlessly.
Stars: ✭ 271 (-9.06%)
Mutual labels:  artificial-intelligence
ADL2019
Applied Deep Learning (2019 Spring) @ NTU
Stars: ✭ 20 (-93.29%)
Mutual labels:  attention-mechanism
vista-net
Code for the paper "VistaNet: Visual Aspect Attention Network for Multimodal Sentiment Analysis", AAAI'19
Stars: ✭ 67 (-77.52%)
Mutual labels:  attention-mechanism
Dreamer
Dream to Control: Learning Behaviors by Latent Imagination
Stars: ✭ 269 (-9.73%)
Mutual labels:  artificial-intelligence
Retinal-Disease-Diagnosis-With-Residual-Attention-Networks
Using Residual Attention Networks to diagnose retinal diseases in medical images
Stars: ✭ 14 (-95.3%)
Mutual labels:  attention-mechanism
attention-guided-sparsity
Attention-Based Guided Structured Sparsity of Deep Neural Networks
Stars: ✭ 26 (-91.28%)
Mutual labels:  attention-mechanism
1-60 of 807 similar projects
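Several entries above (Linear Attention Transformer, Performer, Linformer, Galerkin Transformer) revolve around the same idea: replacing quadratic softmax attention with a kernelized form that is linear in sequence length. As a rough illustration only, here is a minimal NumPy sketch of that trick, assuming the elu+1 feature map from Katharopoulos et al. (2020); it is not taken from any of the listed repositories, and all names are illustrative.

```python
import numpy as np

def feature_map(x):
    # elu(x) + 1: a positive feature map standing in for the softmax kernel
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """O(n * d^2) attention: associativity lets us compute K^T V once,
    instead of the O(n^2) score matrix Q K^T."""
    Qf, Kf = feature_map(Q), feature_map(K)   # (n, d) each
    kv = Kf.T @ V                             # (d, d_v): global key/value summary
    z = Qf @ Kf.sum(axis=0)                   # (n,): per-query normalizer
    return (Qf @ kv) / z[:, None]

rng = np.random.default_rng(0)
n, d = 6, 4
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)  # (6, 4)
```

Because the feature map is positive, this is exactly equivalent to forming the full attention matrix `feature_map(Q) @ feature_map(K).T` and row-normalizing it, just computed in a different association order.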