
ducha-aiki / google-retrieval-challenge-2019-fastai-starter

Licence: other
fast.ai starter kit for Google Landmark Retrieval 2019 challenge

Programming Languages

Jupyter Notebook, Python

Projects that are alternatives to or similar to google-retrieval-challenge-2019-fastai-starter

Deep-Learning-Experiments-implemented-using-Google-Colab
Colab Compatible FastAI notebooks for NLP and Computer Vision Datasets
Stars: ✭ 16 (-74.19%)
Mutual labels:  kaggle, fastai
kuzushiji-recognition
Kuzushiji Recognition Kaggle 2019. Build a DL model to transcribe ancient Kuzushiji into contemporary Japanese characters. Opening the door to a thousand years of Japanese culture.
Stars: ✭ 16 (-74.19%)
Mutual labels:  kaggle, fastai
libfmp
libfmp - Python package for teaching and learning Fundamentals of Music Processing (FMP)
Stars: ✭ 71 (+14.52%)
Mutual labels:  retrieval
fastai-docker-deploy
Deploy fastai models with Docker
Stars: ✭ 19 (-69.35%)
Mutual labels:  fastai
fastknn
Fast k-Nearest Neighbors Classifier for Large Datasets
Stars: ✭ 64 (+3.23%)
Mutual labels:  kaggle
autogbt-alt
An experimental Python package that reimplements AutoGBT using LightGBM and Optuna.
Stars: ✭ 76 (+22.58%)
Mutual labels:  kaggle
kaggle-human-protein-atlas-image-classification
Kaggle 2018 @ Human Protein Atlas Image Classification
Stars: ✭ 34 (-45.16%)
Mutual labels:  kaggle
Music-Genre-Classification
Genre Classification using Convolutional Neural Networks
Stars: ✭ 27 (-56.45%)
Mutual labels:  fastai
Kaggle-Competition-Sberbank
Top 1% rankings (22/3270) code sharing for Kaggle competition Sberbank Russian Housing Market: https://www.kaggle.com/c/sberbank-russian-housing-market
Stars: ✭ 31 (-50%)
Mutual labels:  kaggle
kaggle-satellite-imagery-feature-detection
Satellite Imagery Feature Detection (68 out of 419)
Stars: ✭ 29 (-53.23%)
Mutual labels:  kaggle
kaggle-plasticc
Solution to Kaggle's PLAsTiCC Astronomical Classification Competition
Stars: ✭ 50 (-19.35%)
Mutual labels:  kaggle
kaggle-tools
Some tools that I often find myself using in Kaggle challenges.
Stars: ✭ 33 (-46.77%)
Mutual labels:  kaggle
kaggle-champs-scalar-coupling
19th place solution in "Predicting Molecular Properties"
Stars: ✭ 26 (-58.06%)
Mutual labels:  kaggle
kaggle getting started
Kaggle getting started competition examples
Stars: ✭ 18 (-70.97%)
Mutual labels:  kaggle
salbow
Saliency Weighted Convolutional Features for Instance Search
Stars: ✭ 55 (-11.29%)
Mutual labels:  retrieval
Kaggle-Sea-Lions-Solution
NOAA Fisheries Steller Sea Lion Population Count
Stars: ✭ 13 (-79.03%)
Mutual labels:  kaggle
rnn darts fastai
Implement Differentiable Architecture Search (DARTS) for RNN with fastai
Stars: ✭ 21 (-66.13%)
Mutual labels:  fastai
data-visualization-deck-gl
An experiment visualizing tree and flight record data in New York, using Deck.gl and Kaggle
Stars: ✭ 54 (-12.9%)
Mutual labels:  kaggle
imaterialist-furniture-2018
Kaggle competition
Stars: ✭ 76 (+22.58%)
Mutual labels:  kaggle
RTX-2080Ti-Vs-GTX-1080Ti-CIFAR-100-Benchmarks
No description or website provided.
Stars: ✭ 16 (-74.19%)
Mutual labels:  fastai

Google Landmark Retrieval 2019 Competition fast.ai Starter Pack

The code here is all you need to make a first submission to the Google Landmark Retrieval 2019 Competition. It is based on the fastai library, release 1.0.47, and borrows helper code from the great cnnimageretrieval-pytorch library. The latter gives much better results than the code in this repo, but it does not produce a ready-made submission and takes 3 days to converge, compared to 45 minutes here.

Making first submission

  1. Install the fastai library, specifically version 1.0.47.

  2. Install the faiss library: conda install faiss-gpu cudatoolkit=9.0 -c pytorch -y

  3. Clone this repository.

  4. Start the download process for the data. It will take a while, so in the meantime you can run the code.

  5. Note that the code here does not depend on the competition data for training, only for making the submission.
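The final step of the workflow is writing a submission file. For Google Landmark Retrieval 2019, the submission is a CSV with an `id` column and an `images` column containing up to 100 space-separated index image ids per test image. As a minimal sketch (the function name and the toy ids are made up for illustration):

```python
import csv
import io

def write_submission(neighbors, fileobj, max_neighbors=100):
    """Write a retrieval submission CSV.

    neighbors: dict mapping a test image id to a ranked list of
    index image ids; only the top `max_neighbors` are kept.
    """
    writer = csv.writer(fileobj)
    writer.writerow(["id", "images"])
    for query_id, index_ids in neighbors.items():
        writer.writerow([query_id, " ".join(index_ids[:max_neighbors])])

# Toy example with made-up image ids
buf = io.StringIO()
write_submission({"abc123": ["d1", "d2", "d3"]}, buf)
print(buf.getvalue())
```

In the real pipeline the `neighbors` mapping comes from the nearest-neighbor search in the submission notebook.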

Notebooks

  1. download-and-create-microtrain - downloads all the auxiliary data for training and validation
  2. validation-no-training - playing with pretrained networks and setting up the validation procedure
  3. training-validate-bad - training DenseNet121 on the created micro-train in 45 minutes and playing with post-processing. It works as described, but only by pure luck: many different "subclusters" (= labels) depict the same landmark. So do not use it to train on all 19k subclusters.
  4. training-validate-good-full - instead, use "clusters" as labels; this gives much better results.
  5. submission-trained - creating a first submission. Warning: this could take a long time (~4-12 hours) because of the dataset size.
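The retrieval step at the heart of the submission notebook is a nearest-neighbor search over image descriptors: embed the query and index images with the trained network, then rank index images by similarity. The repo uses faiss for this because of the dataset size; the sketch below illustrates the same logic with plain NumPy on a toy scale (function and variable names are made up for illustration):

```python
import numpy as np

def retrieve(query_feats, index_feats, k=100):
    """Rank index images for each query by cosine similarity.

    query_feats: (n_query, dim) descriptors, index_feats: (n_index, dim).
    Returns the indices of the top-k index images per query.
    """
    # L2-normalize so the dot product equals cosine similarity
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    d = index_feats / np.linalg.norm(index_feats, axis=1, keepdims=True)
    sims = q @ d.T                             # (n_query, n_index)
    return np.argsort(-sims, axis=1)[:, :k]    # top-k per query, best first

# Toy data: the first index image is an exact copy of query 0,
# so it should come back as its top match.
rng = np.random.default_rng(0)
queries = rng.normal(size=(2, 8)).astype(np.float32)
index = np.vstack([queries[0], rng.normal(size=(4, 8)).astype(np.float32)])
ranks = retrieve(queries, index, k=3)
```

With L2-normalized descriptors, the same search in faiss would use an inner-product index (e.g. `faiss.IndexFlatIP`), which is what makes the full-dataset search tractable.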