michalwols / yann

License: MIT
Yet Another Neural Network Library 🤔


Projects that are alternatives of or similar to yann

Echotorch
A Python toolkit for Reservoir Computing and Echo State Network experimentation based on PyTorch. EchoTorch is the only Python module available to easily create Deep Reservoir Computing models.
Stars: ✭ 231 (+788.46%)
Mutual labels:  torch
NeuroFlow
Awesome deep learning crate
Stars: ✭ 69 (+165.38%)
Mutual labels:  nn
Densenet
MXNet implementation for DenseNet
Stars: ✭ 28 (+7.69%)
Mutual labels:  nn
Gpt2 Newstitle
Chinese news-title generation with GPT-2 (a Chinese GPT-2 news headline generation project with extremely detailed comments).
Stars: ✭ 235 (+803.85%)
Mutual labels:  torch
dcgan vae pytorch
DCGAN combined with VAE in PyTorch!
Stars: ✭ 110 (+323.08%)
Mutual labels:  nn
Credit-Card-Fraud
No description or website provided.
Stars: ✭ 17 (-34.62%)
Mutual labels:  nn
Visdial
[CVPR 2017] Torch code for Visual Dialog
Stars: ✭ 215 (+726.92%)
Mutual labels:  torch
neat-python
Python implementation of the NEAT neuroevolution algorithm
Stars: ✭ 32 (+23.08%)
Mutual labels:  nn
cheatsheets-ai-fork
Cheat Sheets for deep learning and machine learning.
Stars: ✭ 21 (-19.23%)
Mutual labels:  nn
graftr
graftr: an interactive shell to view and edit PyTorch checkpoints.
Stars: ✭ 89 (+242.31%)
Mutual labels:  torch
Floorplantransformation
Raster-to-Vector: Revisiting Floorplan Transformation
Stars: ✭ 243 (+834.62%)
Mutual labels:  torch
Artificial Intelligence Deep Learning Machine Learning Tutorials
A comprehensive list of Deep Learning / Artificial Intelligence and Machine Learning tutorials - rapidly expanding into areas of AI/Deep Learning / Machine Vision / NLP and industry specific areas such as Climate / Energy, Automotives, Retail, Pharma, Medicine, Healthcare, Policy, Ethics and more.
Stars: ✭ 2,966 (+11307.69%)
Mutual labels:  torch
Pytorch Book
PyTorch tutorials and fun projects including neural talk, neural style, poem writing and anime generation (companion code for the book 《深度学习框架PyTorch:入门与实战》, "Deep Learning Framework PyTorch: Introduction and Practice")
Stars: ✭ 9,546 (+36615.38%)
Mutual labels:  nn
Alphaction
Spatio-Temporal Action Localization System
Stars: ✭ 221 (+750%)
Mutual labels:  torch
lantern
[Android Library] Handles the device camera flash as a torch on Android.
Stars: ✭ 81 (+211.54%)
Mutual labels:  torch
Torchdata
PyTorch dataset extended with map, cache, etc. (similar to tensorflow.data)
Stars: ✭ 226 (+769.23%)
Mutual labels:  torch
NMSIS
Nuclei Microcontroller Software Interface Standard Development Repo
Stars: ✭ 24 (-7.69%)
Mutual labels:  nn
deepgenres.torch
Predict the genre of a song using the Torch deep learning library
Stars: ✭ 18 (-30.77%)
Mutual labels:  torch
ThArrays.jl
A Julia interface for PyTorch's C++ backend, focusing on Tensor, AD, and JIT
Stars: ✭ 23 (-11.54%)
Mutual labels:  torch
multiclass-semantic-segmentation
Experiments with UNet/FPN models and the Cityscapes/KITTI datasets [PyTorch]
Stars: ✭ 96 (+269.23%)
Mutual labels:  torch

yann (Yet Another Neural Network Library)

Yann is an extended version of torch.nn, adding a ton of sugar to make training models as fast and easy as possible.

Getting Started

Install

pip install yann
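
To check that the install worked, importing the package is enough (it is imported as yann in the example below):

python -c "import yann"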

Train LeNet on MNIST

import torch
from torch import nn
from torchvision import transforms

import yann
from yann.train import Trainer
from yann.modules import Stack, Flatten, Infer
from yann.params import HyperParams, Choice, Range


class Params(HyperParams):
  dataset = 'MNIST'
  batch_size = 32
  epochs = 10
  optimizer: Choice(('SGD', 'Adam')) = 'SGD'
  learning_rate: Range(.01, .0001) = .01
  momentum = 0

  seed = 1

# parse command line arguments
params = Params.from_command()
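# the annotated fields above become typed command line flags
# (see the generated --help output below)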

# set random, numpy and pytorch seeds in one call
yann.seed(params.seed)
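# (roughly equivalent to calling random.seed, numpy.random.seed and
# torch.manual_seed with the same value)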

lenet = Stack(
  Infer(nn.Conv2d, 10, kernel_size=5),
  nn.MaxPool2d(2),
  nn.ReLU(inplace=True),
  Infer(nn.Conv2d, 20, kernel_size=5),
  nn.MaxPool2d(2),
  nn.ReLU(inplace=True),
  Flatten(),
  Infer(nn.Linear, 50),
  nn.ReLU(inplace=True),
  Infer(nn.Linear, 10),
  activation=nn.LogSoftmax(dim=1)
)

# run a forward pass to infer input shapes using `Infer` modules
lenet(torch.rand(1, 1, 28, 28))
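# after this dummy batch each Infer wrapper has materialized its module
# with concrete input sizes, e.g. Linear(in_features=320, out_features=50)
# in the MODEL printout further down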

# use the registry to resolve optimizer name to an optimizer class
optimizer = yann.resolve.optimizer(
  params.optimizer,
  yann.trainable(lenet.parameters()),
  momentum=params.momentum,
  lr=params.learning_rate
)
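# with the default params this is roughly equivalent to
# torch.optim.SGD(lenet.parameters(), lr=0.01, momentum=0)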

train = Trainer(
  model=lenet,
  optimizer=optimizer,
  dataset=params.dataset,
  batch_size=params.batch_size,
  transform=transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
  ]),
  loss='nll_loss',
  metrics=('accuracy',)
)

train(params.epochs)

# save checkpoint
train.checkpoint()

# plot the loss curve
train.history.plot()
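
For comparison, the one-shot shape inference that Infer performs is also available in standard PyTorch (1.8+) through its lazy modules; here is a minimal sketch of the same LeNet built with nn.LazyConv2d and nn.LazyLinear (plain PyTorch API, not part of yann):

import torch
from torch import nn

# Lazy* modules leave their input sizes unset until the first forward pass,
# mirroring what yann's Infer wrapper does above
lazy_lenet = nn.Sequential(
  nn.LazyConv2d(10, kernel_size=5),
  nn.MaxPool2d(2),
  nn.ReLU(inplace=True),
  nn.LazyConv2d(20, kernel_size=5),
  nn.MaxPool2d(2),
  nn.ReLU(inplace=True),
  nn.Flatten(),
  nn.LazyLinear(50),
  nn.ReLU(inplace=True),
  nn.LazyLinear(10),
  nn.LogSoftmax(dim=1),
)

# one dummy batch materializes every lazy module with concrete sizes
lazy_lenet(torch.rand(1, 1, 28, 28))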

view the generated CLI help:

python train_mnist.py -h
usage: train_mnist.py [-h] [-o {SGD,Adam}] [-lr LEARNING_RATE] [-d DATASET]
                      [-bs BATCH_SIZE] [-e EPOCHS] [-m MOMENTUM] [-s SEED]

optional arguments:
  -h, --help            show this help message and exit
  -o {SGD,Adam}, --optimizer {SGD,Adam}
                        optimizer (default: SGD)
  -lr LEARNING_RATE, --learning_rate LEARNING_RATE
                        learning_rate (default: 0.01)
  -d DATASET, --dataset DATASET
                        dataset (default: MNIST)
  -bs BATCH_SIZE, --batch_size BATCH_SIZE
                        batch_size (default: 32)
  -e EPOCHS, --epochs EPOCHS
                        epochs (default: 10)
  -m MOMENTUM, --momentum MOMENTUM
                        momentum (default: 0)
  -s SEED, --seed SEED  seed (default: 1)

then start a training run:

python train_mnist.py -bs=16

which should print the following to stdout:

Params(
  optimizer=SGD,
  learning_rate=0.01,
  dataset=MNIST,
  batch_size=16,
  epochs=10,
  momentum=0,
  seed=1
)
Starting training

name: MNIST-Stack
root: train-runs/MNIST-Stack/19-09-25T18:02:52
batch_size: 16
device: cpu

MODEL
=====

Stack(
  (infer0): Infer(
    (module): Conv2d(1, 10, kernel_size=(5, 5), stride=(1, 1))
  )
  (max_pool2d0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (re_lu0): ReLU(inplace=True)
  (infer1): Infer(
    (module): Conv2d(10, 20, kernel_size=(5, 5), stride=(1, 1))
  )
  (max_pool2d1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (re_lu1): ReLU(inplace=True)
  (flatten0): Flatten()
  (infer2): Infer(
    (module): Linear(in_features=320, out_features=50, bias=True)
  )
  (re_lu2): ReLU(inplace=True)
  (infer3): Infer(
    (module): Linear(in_features=50, out_features=10, bias=True)
  )
  (activation): LogSoftmax()
)


DATASET
=======

TransformDataset(
Dataset: Dataset MNIST
    Number of datapoints: 60000
    Root location: /Users/michal/.torch/datasets/MNIST
    Split: Train
Transforms: (Compose(
    ToTensor()
    Normalize(mean=(0.1307,), std=(0.3081,))
),)
)


LOADER
======

<torch.utils.data.dataloader.DataLoader object at 0x1a45cc8940>

LOSS
====

<function nll_loss at 0x120b700d0>


OPTIMIZER
=========

SGD (
Parameter Group 0
    dampening: 0
    lr: 0.01
    momentum: 0
    nesterov: False
    weight_decay: 0.0001
)

SCHEDULER
=========

None


PROGRESS
========
epochs: 0
steps: 0
samples: 0


Starting epoch 0

OPTIMIZER
=========

SGD (
Parameter Group 0
    dampening: 0
    lr: 0.01
    momentum: 0
    nesterov: False
    weight_decay: 0.0001
)


PROGRESS
========
epochs: 0
steps: 0
samples: 0


Batch inputs shape: (16, 1, 28, 28)
Batch targets shape: (16,)
Batch outputs shape: (16, 10)

batch:        0	accuracy: 0.1875	loss: 2.3783
batch:      128	accuracy: 0.6250	loss: 2.0528
batch:      256	accuracy: 0.6875	loss: 0.6222
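
Since the model ends in LogSoftmax and trains with nll_loss, the objective is the usual cross-entropy written as log-softmax plus negative log-likelihood; a quick sanity check of that equivalence in plain PyTorch:

import torch
import torch.nn.functional as F

logits = torch.randn(16, 10)           # raw outputs for a batch of 16
targets = torch.randint(0, 10, (16,))  # class labels

# log_softmax followed by nll_loss matches cross_entropy on the raw logits
nll = F.nll_loss(F.log_softmax(logits, dim=1), targets)
ce = F.cross_entropy(logits, targets)
assert torch.allclose(nll, ce)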