
thomasbrandon / Mish Cuda

Licence: MIT
Mish Activation Function for PyTorch

Projects that are alternatives of or similar to Mish Cuda

Paper Reading
Deep learning paper reading and hands-on data warehouse practice. Understands engineering deployment better than algorithm people, and understands algorithm models better than engineering people.
Stars: ✭ 101 (+0%)
Mutual labels:  jupyter-notebook
Maps Location History
Get, Concatenate and Process your location history from Google Maps TimeLine
Stars: ✭ 99 (-1.98%)
Mutual labels:  jupyter-notebook
Sequana
Sequana: a set of Snakemake NGS pipelines
Stars: ✭ 100 (-0.99%)
Mutual labels:  jupyter-notebook
Jupyternotebooks Medium
Stars: ✭ 100 (-0.99%)
Mutual labels:  jupyter-notebook
Mxnet Finetuner
An all-in-one Deep Learning toolkit for image classification and fine-tuning pretrained models using MXNet.
Stars: ✭ 100 (-0.99%)
Mutual labels:  jupyter-notebook
Awesome Pytorch List Cnversion
Translation of Awesome-pytorch-list into Chinese; work in progress...
Stars: ✭ 1,361 (+1247.52%)
Mutual labels:  jupyter-notebook
Fiftyfizzbuzzes
Fifty different implementations of Fizzbuzz in Python.
Stars: ✭ 100 (-0.99%)
Mutual labels:  jupyter-notebook
Codeinquarantine
Stars: ✭ 101 (+0%)
Mutual labels:  jupyter-notebook
Data What Now
All codes from the DataWhatNow blog.
Stars: ✭ 100 (-0.99%)
Mutual labels:  jupyter-notebook
Dadoware
Brazilian-Portuguese word list and instructions booklet for Diceware
Stars: ✭ 100 (-0.99%)
Mutual labels:  jupyter-notebook
Curved Lane Lines
Detect curved lane lines using HSV filtering and sliding window search.
Stars: ✭ 100 (-0.99%)
Mutual labels:  jupyter-notebook
Deep Image Analogy Pytorch
Visual Attribute Transfer through Deep Image Analogy in PyTorch!
Stars: ✭ 100 (-0.99%)
Mutual labels:  jupyter-notebook
Europilot
A toolkit for controlling Euro Truck Simulator 2 with Python to develop self-driving algorithms.
Stars: ✭ 1,366 (+1252.48%)
Mutual labels:  jupyter-notebook
Bitmex Simple Trading Robot
Simple BitMEX trading robot.
Stars: ✭ 100 (-0.99%)
Mutual labels:  jupyter-notebook
Noworkflow
Supporting infrastructure to run scientific experiments without a scientific workflow management system.
Stars: ✭ 100 (-0.99%)
Mutual labels:  jupyter-notebook
Transfer Learning
Support code for the medium blog on transfer learning. Link to the blog in the Readme file.
Stars: ✭ 100 (-0.99%)
Mutual labels:  jupyter-notebook
Text Classification
An example on how to train supervised classifiers for multi-label text classification using sklearn pipelines
Stars: ✭ 100 (-0.99%)
Mutual labels:  jupyter-notebook
Airbnb Amenity Detection
Repo for 42 days project to replicate/improve Airbnb's amenity (object) detection pipeline.
Stars: ✭ 101 (+0%)
Mutual labels:  jupyter-notebook
Unet
Generic U-Net Tensorflow 2 implementation for semantic segmentation
Stars: ✭ 100 (-0.99%)
Mutual labels:  jupyter-notebook
Deep Learning Coursera
Deep Learning Specialization by Andrew Ng, deeplearning.ai.
Stars: ✭ 1,366 (+1252.48%)
Mutual labels:  jupyter-notebook

Mish-Cuda: Self Regularized Non-Monotonic Activation Function

This is a PyTorch CUDA implementation of the Mish activation by Diganta Misra (https://github.com/digantamisra98/).
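
For reference, Mish is defined as mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + e^x)). A minimal pure-PyTorch version, equivalent to the mish_pt baseline in the benchmarks below, is a one-liner; the commented lines sketch how the CUDA module would slot in, assuming the MishCuda name used by downstream projects:

  import torch
  import torch.nn.functional as F

  def mish_pt(x):
      # Pure-PyTorch Mish: x * tanh(softplus(x)) = x * tanh(ln(1 + e^x))
      return x * torch.tanh(F.softplus(x))

  # Drop-in CUDA version (module name assumed from common downstream usage):
  # from mish_cuda import MishCuda
  # act = MishCuda()
  # y = act(torch.randn(8, 16, device="cuda"))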

Installation

It is currently distributed as a source-only PyTorch extension, so you need a properly set up toolchain and CUDA compiler to install it.

  1. Toolchain - In conda the cxx_linux-64 package provides an appropriate toolchain. However, there can still be compatibility issues with this depending on your system. You can also try the system toolchain.
  2. CUDA Toolkit - The NVIDIA CUDA Toolkit is required in addition to the drivers, as it provides the needed headers and tools. Get the appropriate version for your Linux distro from NVIDIA, or check for distro-specific instructions otherwise.

It is important that your CUDA Toolkit version matches the version PyTorch was built for, or errors can occur; you can check the match as in the sketch below. Currently PyTorch is built for CUDA v10.0 and v9.2.
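
As a quick sanity check before building, you can compare the CUDA version PyTorch was built against with the nvcc compiler on your PATH; a minimal sketch (the exact nvcc output format varies by version):

  import subprocess
  import torch

  # CUDA version PyTorch was built against, e.g. "10.0"
  print("PyTorch built for CUDA:", torch.version.cuda)

  # CUDA Toolkit version of the nvcc compiler on PATH
  out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True)
  # Last line looks like: "Cuda compilation tools, release 10.0, V10.0.130"
  print(out.stdout.strip().splitlines()[-1])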

Performance

The CUDA implementation appears to mirror the learning performance of the original implementation, and no stability issues have been observed. In terms of speed it is fairly comparable with other PyTorch activation functions, and significantly faster than the pure PyTorch implementation:

Profiling over 100 runs after 10 warmup runs.
Profiling on GeForce RTX 2070
Testing on torch.float16:
 relu_fwd:      223.7µs ± 1.026µs (221.6µs - 229.2µs)
 relu_bwd:      312.1µs ± 2.308µs (307.8µs - 317.4µs)
 softplus_fwd:  342.2µs ± 38.08µs (282.4µs - 370.6µs)
 softplus_bwd:  488.5µs ± 53.75µs (406.0µs - 528.4µs)
 mish_pt_fwd:   658.8µs ± 1.467µs (655.9µs - 661.9µs)
 mish_pt_bwd:   1.135ms ± 4.785µs (1.127ms - 1.145ms)
 mish_cuda_fwd: 267.3µs ± 1.852µs (264.5µs - 274.2µs)
 mish_cuda_bwd: 345.6µs ± 1.875µs (341.9µs - 349.8µs)

Testing on torch.float32:
 relu_fwd:      234.2µs ± 621.8ns (233.2µs - 235.7µs)
 relu_bwd:      419.3µs ± 1.238µs (417.8µs - 426.0µs)
 softplus_fwd:  255.1µs ± 753.6ns (252.4µs - 256.5µs)
 softplus_bwd:  420.2µs ± 631.4ns (418.2µs - 421.9µs)
 mish_pt_fwd:   797.4µs ± 1.094µs (795.4µs - 802.8µs)
 mish_pt_bwd:   1.689ms ± 1.222µs (1.686ms - 1.696ms)
 mish_cuda_fwd: 282.9µs ± 876.1ns (281.1µs - 287.8µs)
 mish_cuda_bwd: 496.3µs ± 1.781µs (493.6µs - 503.0µs)

Testing on torch.float64:
 relu_fwd:      450.4µs ± 879.7ns (448.8µs - 456.4µs)
 relu_bwd:      834.2µs ± 925.8ns (832.3µs - 838.8µs)
 softplus_fwd:  6.370ms ± 2.348µs (6.362ms - 6.375ms)
 softplus_bwd:  2.359ms ± 1.276µs (2.356ms - 2.365ms)
 mish_pt_fwd:   10.11ms ± 2.806µs (10.10ms - 10.12ms)
 mish_pt_bwd:   4.897ms ± 1.312µs (4.893ms - 4.901ms)
 mish_cuda_fwd: 8.989ms ± 3.646µs (8.980ms - 9.007ms)
 mish_cuda_bwd: 10.92ms ± 3.966µs (10.91ms - 10.93ms)

(Collected with test/perftest.py -b)
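
These numbers come from the repository's own script, but the general approach (timed runs after warmup) can be reproduced with CUDA events; a minimal sketch, not the actual test/perftest.py:

  import torch

  def profile_gpu(fn, x, warmup=10, runs=100):
      # Time fn(x) on the GPU with CUDA events, mirroring the warmup/run counts above.
      for _ in range(warmup):
          fn(x)
      torch.cuda.synchronize()
      times = []
      for _ in range(runs):
          start = torch.cuda.Event(enable_timing=True)
          end = torch.cuda.Event(enable_timing=True)
          start.record()
          fn(x)
          end.record()
          torch.cuda.synchronize()
          times.append(start.elapsed_time(end))  # milliseconds
      t = torch.tensor(times)
      print(f"{t.mean().item():.4f}ms ± {t.std().item():.4f}ms "
            f"({t.min().item():.4f}ms - {t.max().item():.4f}ms)")

  x = torch.randn(1024, 1024, device="cuda")
  profile_gpu(torch.relu, x)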

Note that double precision performance is very low. Some optimisation might be possible, but this does not seem to be a common use case so it is not a priority. Raise an issue if you have a use case for it.
