
robinniesert / kaggle-champs

License: MIT
Code for the CHAMPS Predicting Molecular Properties Kaggle competition

Programming Languages

Python
139335 projects - #7 most used programming language
Jupyter Notebook
11667 projects

Projects that are alternatives to or similar to kaggle-champs

well-classified-examples-are-underestimated
Code for the AAAI 2022 publication "Well-classified Examples are Underestimated in Classification with Deep Neural Networks"
Stars: ✭ 21 (-57.14%)
Mutual labels:  transformer, graph-neural-networks
Walk-Transformer
From Random Walks to Transformer for Learning Node Embeddings (ECML-PKDD 2020) (In Pytorch and Tensorflow)
Stars: ✭ 26 (-46.94%)
Mutual labels:  transformer, graph-neural-networks
graphtrans
Representing Long-Range Context for Graph Neural Networks with Global Attention
Stars: ✭ 45 (-8.16%)
Mutual labels:  transformer, graph-neural-networks
Kaggle Quora Insincere Questions Classification
A baseline for a new Kaggle competition: a BERT-based fine-tuning approach plus a Transformer Encoder approach built on tensor2tensor
Stars: ✭ 66 (+34.69%)
Mutual labels:  kaggle, transformer
php-serializer
Serialize PHP variables, including objects, in any format. Supports unserialization too.
Stars: ✭ 47 (-4.08%)
Mutual labels:  transformer
GNNLens2
Visualization tool for Graph Neural Networks
Stars: ✭ 155 (+216.33%)
Mutual labels:  graph-neural-networks
Awesome-low-level-vision-resources
A curated list of resources for Low-level Vision Tasks
Stars: ✭ 35 (-28.57%)
Mutual labels:  transformer
bytekit
A utility library for byte manipulation in Java (not a bytecode manipulation library)
Stars: ✭ 40 (-18.37%)
Mutual labels:  transformer
cape
Continuous Augmented Positional Embeddings (CAPE) implementation for PyTorch
Stars: ✭ 29 (-40.82%)
Mutual labels:  transformer
SelfGNN
A PyTorch implementation of "SelfGNN: Self-supervised Graph Neural Networks without explicit negative sampling" paper, which appeared in The International Workshop on Self-Supervised Learning for the Web (SSL'21) @ the Web Conference 2021 (WWW'21).
Stars: ✭ 24 (-51.02%)
Mutual labels:  graph-neural-networks
ru-dalle
Generate images from text. In Russian.
Stars: ✭ 1,606 (+3177.55%)
Mutual labels:  transformer
keras-vision-transformer
The TensorFlow/Keras implementation of Swin-Transformer and Swin-UNET
Stars: ✭ 91 (+85.71%)
Mutual labels:  transformer
project-code-py
Leetcode using AI
Stars: ✭ 100 (+104.08%)
Mutual labels:  transformer
grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of Graph Machine Learning.
Stars: ✭ 70 (+42.86%)
Mutual labels:  graph-neural-networks
libai
LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training
Stars: ✭ 284 (+479.59%)
Mutual labels:  transformer
StoreItemDemand
(117th place - Top 26%) Deep learning using Keras and Spark for the "Store Item Demand Forecasting" Kaggle competition.
Stars: ✭ 24 (-51.02%)
Mutual labels:  kaggle
fastT5
⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x.
Stars: ✭ 421 (+759.18%)
Mutual labels:  transformer
Cross-lingual-Summarization
Zero-Shot Cross-Lingual Abstractive Sentence Summarization through Teaching Generation and Attention
Stars: ✭ 28 (-42.86%)
Mutual labels:  transformer
ViTs-vs-CNNs
[NeurIPS 2021]: Are Transformers More Robust Than CNNs? (Pytorch implementation & checkpoints)
Stars: ✭ 145 (+195.92%)
Mutual labels:  transformer
R-MeN
Transformer-based Memory Networks for Knowledge Graph Embeddings (ACL 2020) (Pytorch and Tensorflow)
Stars: ✭ 74 (+51.02%)
Mutual labels:  transformer

CHAMPS Predicting Molecular Properties (6th Place Solution)

This repository contains the source code for training the main model used to reach 6th place: a molecular Transformer model with message-passing layers. A more detailed description of the model is posted here: https://www.kaggle.com/c/champs-scalar-coupling/discussion/106407#latest-614111.
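
The discussion post linked above is the authoritative description; the following is only a minimal, illustrative sketch of the general idea, a Transformer encoder block whose atom representations are also updated by message passing over the bond graph. The class name, layer sizes, and adjacency-matrix interface are hypothetical, not the repository's actual code (requires PyTorch >= 1.9 for batch_first attention):

import torch
import torch.nn as nn

class MessagePassingTransformerLayer(nn.Module):
    """Hypothetical sketch: global self-attention over all atoms combined
    with a message-passing update restricted to bonded neighbours."""
    def __init__(self, d_model=128, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.message = nn.Linear(2 * d_model, d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.ReLU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x, adj):
        # x:   (batch, n_atoms, d_model) atom embeddings
        # adj: (batch, n_atoms, n_atoms) 0/1 bond adjacency matrix
        attn_out, _ = self.attn(x, x, x)
        n = x.size(1)
        recv = x.unsqueeze(2).expand(-1, -1, n, -1)   # receiver atom i
        send = x.unsqueeze(1).expand(-1, n, -1, -1)   # sender atom j
        msgs = self.message(torch.cat([recv, send], dim=-1))
        msgs = (msgs * adj.unsqueeze(-1)).sum(dim=2)  # sum over bonded j
        x = self.norm1(x + attn_out + msgs)
        return self.norm2(x + self.ff(x))

For example:

layer = MessagePassingTransformerLayer()
x = torch.randn(2, 9, 128)                 # two molecules, 9 atoms each
adj = (torch.rand(2, 9, 9) > 0.8).float()  # random bond pattern
out = layer(x, adj)                        # -> (2, 9, 128)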

To get the processed data used for training, create a 'proc_data' folder and run preprocess.py and create_crossfolds.py in that order (see the example below). Ensembling is done with the notebooks in the nbs folder.
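
For example, from the repository root (assuming a Unix shell and that both scripts run without extra arguments):

mkdir proc_data
python preprocess.py
python create_crossfolds.py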

Example usage for training in distributed mode on 2 GPUs:

python -m torch.distributed.launch --nproc_per_node=2 train.py
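
torch.distributed.launch starts one process per GPU and passes each process a --local_rank argument. For reference, a launched script typically initializes the process group as below; this is the generic pattern for that (now legacy) launcher, not necessarily what the repository's train.py does:

import argparse
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=0)  # injected by the launcher
args = parser.parse_args()

torch.cuda.set_device(args.local_rank)    # bind this process to one GPU
dist.init_process_group(backend='nccl')   # reads env vars set by the launcher
# after building the model, wrap it for synchronized training, e.g.:
# model = torch.nn.parallel.DistributedDataParallel(
#     model, device_ids=[args.local_rank])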