51. h-transformer-1d: Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning
52. egnn-pytorch: Implementation of E(n)-Equivariant Graph Neural Networks, in PyTorch
53. RETRO-pytorch: Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch
54. glom-pytorch: An attempt at an implementation of GLOM, Geoffrey Hinton's new idea that integrates concepts from neural fields, top-down and bottom-up processing, and attention (consensus between columns) for emergent part-whole hierarchies from data
55. axial-attention: Implementation of Axial Attention - attending to multi-dimensional data efficiently
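The core trick behind axial attention is to replace one attention pass over all height x width positions with two cheaper passes, one along each axis, dropping the cost from O((hw)^2) to O(hw(h + w)). A minimal NumPy sketch of the idea (not the repo's API; function names here are illustrative only):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    # scaled dot-product self-attention over the second-to-last axis;
    # leading axes are treated as independent batch dimensions
    d = q.shape[-1]
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(d)
    return softmax(scores, axis=-1) @ v

def axial_attention(x):
    # x: (height, width, dim) feature map
    # pass 1: attend along width -- each row is an independent sequence
    row_out = attend(x, x, x)                 # (h, w, d)
    # pass 2: attend along height -- transpose so height becomes the sequence axis
    xt = np.swapaxes(row_out, 0, 1)           # (w, h, d)
    col_out = attend(xt, xt, xt)              # (w, h, d)
    return np.swapaxes(col_out, 0, 1)         # back to (h, w, d)

x = np.random.randn(4, 6, 8)
out = axial_attention(x)
print(out.shape)  # (4, 6, 8)
```

After the two passes, information has propagated between any two grid positions via their shared row and column, which is why stacking a few axial layers approximates full 2D attention.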
59. En-transformer: Implementation of E(n)-Transformer, which extends the ideas of Welling's E(n)-Equivariant Graph Neural Network to attention
60. uniformer-pytorch: Implementation of UniFormer, a simple attention and 3D convolutional net that achieved SOTA on a number of video classification tasks, presented at ICLR 2022
61. res-mlp-pytorch: Implementation of ResMLP, an all-MLP solution to image classification, in PyTorch
62. g-mlp-pytorch: Implementation of gMLP, an all-MLP replacement for Transformers, in PyTorch
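gMLP replaces self-attention with a spatial gating unit: the projected features are split into two halves, one half is mixed across token positions by a plain linear layer, and the result gates the other half elementwise. A minimal NumPy sketch of one block, assuming ReLU in place of the paper's GELU for brevity (names and shapes here are illustrative, not the repo's API):

```python
import numpy as np

def gmlp_block(x, W_in, W_spatial, b_spatial, W_out):
    # x: (seq_len, dim) token features
    u, v = np.split(x @ W_in, 2, axis=-1)   # each (seq_len, d_ff)
    u = np.maximum(u, 0)                    # activation (GELU in the paper)
    # spatial gating unit: a linear map across token positions of v
    v = W_spatial @ v + b_spatial           # (seq_len, d_ff)
    return (u * v) @ W_out                  # gate u with v, project back to dim

seq, dim, d_ff = 8, 16, 32
rng = np.random.default_rng(0)
x = rng.standard_normal((seq, dim))
W_in = rng.standard_normal((dim, 2 * d_ff)) * 0.1
W_spatial = np.eye(seq)                     # paper initializes this near identity
b_spatial = np.ones((seq, 1))               # with bias 1, so the block starts near a plain MLP
W_out = rng.standard_normal((d_ff, dim)) * 0.1
out = gmlp_block(x, W_in, W_spatial, b_spatial, W_out)
print(out.shape)  # (8, 16)
```

The spatial projection `W_spatial` is the only place tokens interact, which is what makes the block an attention-free replacement for a Transformer layer.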
63. progen: Implementation and replication of ProGen, Language Modeling for Protein Generation, in JAX
65. invariant-point-attention: Implementation of Invariant Point Attention, used for coordinate refinement in the structure module of AlphaFold2, as a standalone PyTorch module
67. STAM-pytorch: Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification