
AIcrowd / flatland-challenge-starter-kit

Licence: other

Programming Languages

python
139335 projects - #7 most used programming language
shell
77523 projects

Projects that are alternatives of or similar to flatland-challenge-starter-kit

IsSeptaFcked
Real-time status for Philadelphia Regional Rail
Stars: ✭ 47 (+176.47%)
Mutual labels:  trains
renfe-cli
python CLI for fast Spanish Renfe timetables retrieval - now with selenium
Stars: ✭ 29 (+70.59%)
Mutual labels:  trains
nodejs-ns-api
Unofficial NodeJS module for Nederlandse Spoorwegen API
Stars: ✭ 13 (-23.53%)
Mutual labels:  trains
Tchou-Tchou
🚂 A menu bar app for macOS that displays the speed of the TGV you are travelling with.
Stars: ✭ 13 (-23.53%)
Mutual labels:  trains
clearml-server-helm
ClearML Server for Kubernetes Clusters Using Helm
Stars: ✭ 18 (+5.88%)
Mutual labels:  trains
Clearml
ClearML - Auto-Magical CI/CD to streamline your ML workflow. Experiment Manager, MLOps and Data-Management
Stars: ✭ 2,868 (+16770.59%)
Mutual labels:  trains
pyinrail
A python wrapper for Indian Railways Enquiry API!
Stars: ✭ 40 (+135.29%)
Mutual labels:  trains
jr
Putting the "train" in training
Stars: ✭ 58 (+241.18%)
Mutual labels:  trains

⚠️ NOTICE: This starter kit was used for 2019 challenge and has been deprecated in favour of 2020 Flatland challenge's starter kit present here: https://gitlab.aicrowd.com/flatland/neurips2020-flatland-starter-kit

AIcrowd-Logo

Flatland Challenge Starter Kit

gitter-badge

Instructions to make submissions to the SBB CFF Flatland Challenge.

Participants will have to submit their code, along with packaging specifications, and the evaluator will automatically build a Docker image and execute their agent against an arbitrary number of pre-generated Flatland environments.

Dependencies

  • Anaconda (install by following the instructions here). At least version 4.5.11 is required to correctly populate environment.yml.
  • flatland-rl (install by following the instructions here). IMPORTANT: please note that you will need flatland-rl version >=2.1.7 to be able to submit.
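
As a quick way to confirm that your installed flatland-rl meets the >=2.1.7 requirement, you can query the package metadata. This is a minimal sketch using pkg_resources; any equivalent method works:

import pkg_resources

# Print the installed flatland-rl version; it must be >= 2.1.7 to submit.
print(pkg_resources.get_distribution("flatland-rl").version)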

Setup

  • Clone the repository
git clone [email protected]:AIcrowd/flatland-challenge-starter-kit.git
cd flatland-challenge-starter-kit
  • Create a conda environment from the provided environment.yml
conda env create -f environment.yml
  • Activate the conda environment and install your code specific dependencies
conda activate flatland-rl
# If, say, you want to install PyTorch:
# conda install pytorch torchvision -c pytorch
#
# You can also use pip to install any additional packages,
# for example:
# pip install -U flatland-rl
# which updates the flatland-rl package to the latest version

Test Submission Locally

  • First, let's begin by downloading a small set of test envs and putting them in a location of your choice. In this exercise, we assume that you will download the test-envs provided at https://www.aicrowd.com/challenges/flatland-challenge/dataset_files and untar them inside ./scratch/test-envs, so that you have a directory structure similar to:
./scratch
└── test-envs
    ├── Test_0
    │   ├── Level_0.pkl
    │   └── Level_1.pkl
    ├── Test_1
    │   ├── Level_0.pkl
    │   └── Level_1.pkl
    ├── Test_2
    │   ├── Level_0.pkl
    │   └── Level_1.pkl
    ├── Test_3
    │   ├── Level_0.pkl
    │   └── Level_1.pkl
    ├── Test_4
    │   ├── Level_0.pkl
    │   └── Level_1.pkl
    ├── Test_5
    │   ├── Level_0.pkl
    │   └── Level_1.pkl
    ├── Test_6
    │   ├── Level_0.pkl
    │   └── Level_1.pkl
    ├── Test_7
    │   ├── Level_0.pkl
    │   └── Level_1.pkl
    ├── Test_8
    │   ├── Level_0.pkl
    │   └── Level_1.pkl
    └── Test_9
        ├── Level_0.pkl
        └── Level_1.pkl
  • redis-server: NOTE: please ensure that you have a redis-server running on localhost; the evaluator and your agent communicate through it. You can find more instructions on how to run Redis here. A minimal connectivity check is sketched after this list.

  • Run evaluator


# In a separate tab : run local grader
flatland-evaluator --tests <path_to_your_tests_directory>

# If you downloaded the files to the location we specified above, then you should be running : 
flatland-evaluator --tests ./scratch/test-envs/
  • Run Agent(s)
# In a separate tab :
export AICROWD_TESTS_FOLDER=<path_to_your_tests_directory>
# or on Windows :
# 
#  SET AICROWD_TESTS_FOLDER=<path_to_your_tests_directory>
python run.py
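
As mentioned in the redis-server item above, the evaluator and your agent communicate through a local Redis instance. A minimal connectivity check, assuming the default host and port (localhost:6379) and that the redis Python client is available in your environment, could look like this:

import redis

# Ping the local redis-server used by the evaluator; prints True if reachable.
# Assumes the default host/port -- adjust if your setup differs.
client = redis.Redis(host="localhost", port=6379)
print(client.ping())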

How do I specify my software runtime?

The software runtime is specified by exporting your conda env to the root of your repository:

# The included environment.yml is generated by the command below, and you do not need to run it again
# if you did not add any custom dependencies

conda env export --no-build > environment.yml

# Note the `--no-build` flag, which is important if you want your anaconda env to be reproducible across operating systems

This environment.yml file will be used to recreate the conda environment inside the Docker container. This repository includes an example environment.yml.

You can specify your software environment by using all the available configuration options of repo2docker. (But please remember to use aicrowd-repo2docker to have GPU support)

What should my code structure be like?

Please follow the structure documented in the included run.py to adapt your existing code to the format required for this round.
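
For orientation, here is a heavily condensed sketch of that structure: connect to the evaluator, keep requesting environments until none are left, act in each one until the episode ends, and finally call submit(). The class and method names (FlatlandRemoteClient, env_create, env_step, submit) follow the included run.py, but treat that file as authoritative for the exact signatures. The random controller and the GlobalObsForRailEnv observation builder below are purely illustrative placeholders for your own agent:

import numpy as np
from flatland.evaluators.client import FlatlandRemoteClient
from flatland.envs.observations import GlobalObsForRailEnv

# Connects to the evaluator service through the local redis-server.
remote_client = FlatlandRemoteClient()

def my_controller(observation, number_of_agents):
    # Illustrative placeholder: one random action (0-4) per agent.
    return {agent: np.random.randint(0, 5) for agent in range(number_of_agents)}

while True:
    # Request the next evaluation environment; a falsy observation
    # means all environments have been evaluated.
    observation, info = remote_client.env_create(
        obs_builder_object=GlobalObsForRailEnv()
    )
    if not observation:
        break

    number_of_agents = len(remote_client.env.agents)
    while True:
        action = my_controller(observation, number_of_agents)
        observation, all_rewards, done, info = remote_client.env_step(action)
        if done["__all__"]:
            break

# Report the aggregated results back to the evaluator.
print(remote_client.submit())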

Important Concepts

Repository Structure

  • aicrowd.json : Each repository should have an aicrowd.json file with the following content:
{
  "challenge_id": "aicrowd_flatland_challenge_2019",
  "grader_id": "aicrowd_flatland_challenge_2019",
  "authors": ["your-aicrowd-username"],
  "description": "sample description about your awesome agent",
  "license": "MIT",
  "debug": false
}

This file is used to map your submission to the challenge, so please remember to use the correct challenge_id and grader_id as specified above.

If you set debug to true, the evaluation will run on a separate set of 20 environments, and the logs from your submitted code (if it fails) will be made available to you to help you debug. NOTE: IMPORTANT: By default we have set debug: false, so when you have done the basic integration testing of your code and are ready to make a final submission, please make sure that debug is set to false in aicrowd.json.
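
As a purely illustrative sanity check (not part of the starter kit), you can load aicrowd.json before tagging a submission and confirm that the fields above are set the way you intend:

import json

# Load the submission metadata the grader relies on.
with open("aicrowd.json") as f:
    config = json.load(f)

assert config["challenge_id"] == "aicrowd_flatland_challenge_2019"
assert config["grader_id"] == "aicrowd_flatland_challenge_2019"

# debug=true runs on the small debug set and exposes your logs;
# remember to switch it back to false for your final submission.
print("debug mode:", config["debug"])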

Code Entrypoint

The evaluator will use /home/aicrowd/run.sh as the entrypoint, so please remember to have a run.sh at the root of your repository which can set up any necessary environment variables and then start executing your actual code. This repository includes a sample run.sh file. If you are using a Dockerfile to specify your software environment, please remember to create an aicrowd user and place the entrypoint code at run.sh. If you are unsure what this is all about, you can leave run.sh as is and instead focus on run.py, which is called from within run.sh.

Submission

To make a submission, you will have to create a private repository on https://gitlab.aicrowd.com/.

You will have to add your SSH keys to your GitLab account by following the instructions here. If you do not have SSH keys, you will first need to generate a key pair.

Then you can create a submission by making a tag push to your repository on https://gitlab.aicrowd.com/. Any tag push (where the tag name begins with "submission-") to your private repository is considered a submission.

You can then add the correct git remote, and finally submit by doing:

cd flatland-challenge-starter-kit
# Add AIcrowd git remote endpoint
git remote add aicrowd git@gitlab.aicrowd.com:<YOUR_AICROWD_USER_NAME>/flatland-challenge-starter-kit.git
git push aicrowd master

# Create a tag for your submission and push
git tag -am "submission-v0.1" submission-v0.1
git push aicrowd master
git push aicrowd submission-v0.1

# Note: if the contents of your repository (latest commit hash) do not change,
# then pushing a new tag will **not** trigger a new evaluation.

You should now be able to see the details of your submission at: https://gitlab.aicrowd.com/<YOUR_AICROWD_USER_NAME>/flatland-challenge-starter-kit/issues

NOTE: Remember to update your username in the link above 😉

At the link above, you should see the progress of your evaluation take shape (the whole evaluation can take a bit of time, so please be a bit patient too 😉).

Best of Luck 🎉 🎉

Author

Sharada Mohanty
