
ryul99 / Pytorch Project Template

Licence: apache-2.0
Deep Learning project template for PyTorch (Distributed Learning is supported)


Projects that are alternatives of or similar to Pytorch Project Template

Ansible Skeleton
The skeleton to create new ansible roles.
Stars: ✭ 5 (-93.42%)
Mutual labels:  yaml, template
Perun
A command-line validation tool for AWS CloudFormation that allows you to conquer the cloud faster!
Stars: ✭ 82 (+7.89%)
Mutual labels:  yaml, template
Nn Template
Generic template to bootstrap your PyTorch project with PyTorch Lightning, Hydra, W&B, and DVC.
Stars: ✭ 145 (+90.79%)
Mutual labels:  hydra, template
Yamllint
A linter for YAML files.
Stars: ✭ 1,750 (+2202.63%)
Mutual labels:  lint, yaml
KaiZen-OpenApi-Parser
High-performance Parser, Validator, and Java Object Model for OpenAPI 3.x
Stars: ✭ 119 (+56.58%)
Mutual labels:  lint, yaml
Config Lint
Command line tool to validate configuration files
Stars: ✭ 118 (+55.26%)
Mutual labels:  lint, yaml
Zenbu
🏮 A Jinja2 + YAML based config templater.
Stars: ✭ 114 (+50%)
Mutual labels:  yaml, template
Swaggen
OpenAPI/Swagger 3.0 Parser and Swift code generator
Stars: ✭ 385 (+406.58%)
Mutual labels:  yaml, template
Khayyam
106 Omar Khayyam quatrains in YAML format.
Stars: ✭ 8 (-89.47%)
Mutual labels:  yaml, dataset
Uikit Computer Store Template
Computer store e-commerce template
Stars: ✭ 72 (-5.26%)
Mutual labels:  template
Bootstrap Xd
Bootstrap Design Template — Assets Library — for Adobe XD
Stars: ✭ 74 (-2.63%)
Mutual labels:  template
Jokeapi
A REST API that serves uniformly and well formatted jokes in JSON, XML, YAML or plain text format that also offers a great variety of filtering methods
Stars: ✭ 71 (-6.58%)
Mutual labels:  yaml
Covid19
JSON time-series of coronavirus cases (confirmed, deaths and recovered) per country - updated daily
Stars: ✭ 1,177 (+1448.68%)
Mutual labels:  dataset
Sketchyscene
SketchyScene: Richly-Annotated Scene Sketches. (ECCV 2018)
Stars: ✭ 74 (-2.63%)
Mutual labels:  dataset
Csvpack
csvpack library / gem - tools 'n' scripts for working with tabular data packages using comma-separated values (CSV) datafiles in text with meta info (that is, schema, datatypes, ..) in datapackage.json; download, read into and query CSV datafiles with your SQL database (e.g. SQLite, PostgreSQL, ...) of choice and much more
Stars: ✭ 71 (-6.58%)
Mutual labels:  dataset
Tju Dhd
A newly built high-resolution dataset for object detection and pedestrian detection (IEEE TIP 2020)
Stars: ✭ 75 (-1.32%)
Mutual labels:  dataset
Profile Card
Tailwind CSS Starter Template - Profile Card (Single page website for your profile/links)
Stars: ✭ 69 (-9.21%)
Mutual labels:  template
Toronto 3d
A Large-scale Mobile LiDAR Dataset for Semantic Segmentation of Urban Roadways
Stars: ✭ 69 (-9.21%)
Mutual labels:  dataset
Color Names
Large list of handpicked color names 🌈
Stars: ✭ 1,198 (+1476.32%)
Mutual labels:  dataset
Ml project template
Machine Learning Project Template - Ready to production
Stars: ✭ 75 (-1.32%)
Mutual labels:  template

Deep Learning Project Template for PyTorch

Features

  • TensorBoardX / wandb support
  • A background generator is used (see the reasons for using a background generator)
    • On Windows, the background generator is not supported, so if an error occurs, set use_background_generator to false in the config.
  • Saving and loading of the training state and network checkpoints
    • The training state includes not only the network weights but also the optimizer state, step, and epoch.
    • A checkpoint includes only the network weights, so it can be used for inference.
  • Hydra and OmegaConf are supported
  • Distributed learning using Distributed Data Parallel is supported
  • Config via yaml file / easy dot-style access to config values
  • Code lint / CI
  • Code testing with pytest
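
The training-state vs. checkpoint distinction above can be sketched as follows. This is a minimal illustration with made-up file names and dict keys, not the template's exact save format:

```python
# Sketch of the training-state vs. checkpoint distinction
# (hypothetical file names and dict keys; not the template's exact format).
import torch
import torch.nn as nn

net = nn.Linear(4, 2)
optimizer = torch.optim.Adam(net.parameters())

# Training state: everything needed to resume training.
training_state = {
    "model": net.state_dict(),
    "optimizer": optimizer.state_dict(),
    "step": 1000,
    "epoch": 5,
}
torch.save(training_state, "train_state.pt")

# Checkpoint: network weights only, enough for inference.
torch.save(net.state_dict(), "checkpoint.pt")

# Resuming restores the full state; inference only needs the weights.
state = torch.load("train_state.pt")
net.load_state_dict(state["model"])
optimizer.load_state_dict(state["optimizer"])
```

Saving the optimizer state and counters alongside the weights is what makes an exact resume possible; the weights-only checkpoint stays small for deployment.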

Code Structure

  • config dir: folder for config files
  • dataset dir: dataloader and dataset code live here. Put the dataset itself in the meta dir.
  • model dir: model.py wraps the network architecture; model_arch.py is where the network architecture itself is written.
  • test dir: folder for pytest test code. You can check the flow of tensors through your network by adapting tests/model/net_arch_test.py: copy & paste the Net_arch.forward method into net_arch_test.py and add assert statements to check the tensors.
  • utils dir:
    • train_model.py and test_model.py train and test the model for a single step.
    • utils.py contains utilities: random seed setting, dot-style access to hyperparameters, getting the commit hash, etc.
    • writer.py writes logs to TensorBoard / wandb.
  • trainer.py file: sets up training and iterates over epochs.
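
As an illustration of the net_arch_test.py approach described above, here is a hedged sketch with a made-up two-layer network (the real Net_arch lives in the model dir and will differ):

```python
# Sketch of a tensor-shape test in the style of tests/model/net_arch_test.py.
# Net_arch here is a made-up example network, not the template's actual model.
import torch
import torch.nn as nn

class Net_arch(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(8, 16)
        self.fc2 = nn.Linear(16, 2)

    def forward(self, x):
        x = torch.relu(self.fc1(x))  # (B, 8) -> (B, 16)
        return self.fc2(x)           # (B, 16) -> (B, 2)

def test_net_arch():
    net = Net_arch()
    x = torch.rand(4, 8)        # batch of 4 dummy inputs
    out = net(x)
    assert out.shape == (4, 2)  # check the tensor flow end to end

test_net_arch()
```

Asserting on intermediate and final shapes like this catches wiring mistakes (wrong layer sizes, missing reshapes) without needing any real data.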

Setup

Install requirements

  • python3 (3.6, 3.7, and 3.8 are tested)
  • Write the PyTorch version you want to use into requirements.txt. (https://pytorch.org/get-started/)
  • pip install -r requirements.txt

Config

  • Config is written in yaml files.
    • You can choose configs at config/default.yaml. Custom configs live under config/job/.
  • name is the name of the training run.
  • working_dir is the root directory for saving checkpoints and logs.
  • device is the device mode for running your model. You can choose cpu or cuda.
  • data field
    • Configs for the Dataloader.
    • train_dir / test_dir are globbed with file_format for the Dataloader.
    • If divide_dataset_per_gpu is true, the original dataset is divided into sub-datasets, one per GPU, which means the size of the original dataset should be a multiple of the number of GPUs in use. If this option is false, the dataset is not divided, but the epoch counter advances in multiples of the number of GPUs.
  • train/test field
    • Configs for training options.
    • random_seed sets the python, numpy, and pytorch random seeds.
    • num_epoch is the final iteration step of training.
    • optimizer selects the optimizer. Only the adam optimizer is supported for now.
    • dist configures Distributed Data Parallel.
      • gpus is the number of GPUs you want to use with DDP (the gpus value is used as world_size in DDP). DDP is not used when gpus is 0; all GPUs are used when gpus is -1.
      • timeout is the timeout, in seconds, for process interaction in DDP. When this is set to ~, the default timeout (1800 seconds) is applied in gloo mode and the timeout is turned off in nccl mode.
  • model field
    • Configs for the network architecture and model options.
    • You can add configs in yaml format to configure your network.
  • log field
    • Configs for logging, including tensorboard / wandb logging.
    • summary_interval and checkpoint_interval are the intervals, in steps and epochs respectively, between training logging and checkpoint saving.
    • Checkpoints and logs are saved under working_dir/chkpt_dir and working_dir/trainer.log. Tensorboard logs are saved under working_dir/outputs/tensorboard.
  • load field
    • Loading from the wandb server is supported.
    • wandb_load_path is the Run path shown in the overview of the run. If you don't want to use wandb loading, this field should be ~.
    • network_chkpt_path is the path to the network checkpoint file. If using wandb loading, this field should be the checkpoint file name of the wandb run.
    • resume_state_path is the path to the training state file. If using wandb loading, this field should be the training state file name of the wandb run.

Code lint

  1. pip install -r requirements-dev.txt to install development dependencies (this requires Python 3.6 or above because of black)

  2. pre-commit install to add pre-commit to the git hooks

Train

  • python trainer.py working_dir=$(pwd)

Inspired by
