
CoinCheung / Pytorch Loss

Licence: MIT
label-smooth, amsoftmax, focal-loss, triplet-loss, lovasz-softmax. Maybe useful

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Pytorch Loss

Kmcuda
Large scale K-means and K-nn implementation on NVIDIA GPU / CUDA
Stars: ✭ 627 (-22.78%)
Mutual labels:  cuda
Deep Painterly Harmonization
Code and data for paper "Deep Painterly Harmonization": https://arxiv.org/abs/1804.03189
Stars: ✭ 6,027 (+642.24%)
Mutual labels:  cuda
Numba
NumPy aware dynamic Python compiler using LLVM
Stars: ✭ 7,090 (+773.15%)
Mutual labels:  cuda
Mc Cnn
Stereo Matching by Training a Convolutional Neural Network to Compare Image Patches
Stars: ✭ 638 (-21.43%)
Mutual labels:  cuda
Cuda Convnet2
Automatically exported from code.google.com/p/cuda-convnet2
Stars: ✭ 690 (-15.02%)
Mutual labels:  cuda
Juice
The Hacker's Machine Learning Engine
Stars: ✭ 743 (-8.5%)
Mutual labels:  cuda
Twostreamfusion
Code release for "Convolutional Two-Stream Network Fusion for Video Action Recognition", CVPR 2016.
Stars: ✭ 618 (-23.89%)
Mutual labels:  cuda
Blocksparse
Efficient GPU kernels for block-sparse matrix multiplication and convolution
Stars: ✭ 797 (-1.85%)
Mutual labels:  cuda
Gunrock
High-Performance Graph Primitives on GPUs
Stars: ✭ 718 (-11.58%)
Mutual labels:  cuda
Marian
Fast Neural Machine Translation in C++
Stars: ✭ 777 (-4.31%)
Mutual labels:  cuda
Chainer
A flexible framework of neural networks for deep learning
Stars: ✭ 5,656 (+596.55%)
Mutual labels:  cuda
Warp Ctc
Pytorch Bindings for warp-ctc
Stars: ✭ 684 (-15.76%)
Mutual labels:  cuda
Accelerate
Embedded language for high-performance array computations
Stars: ✭ 751 (-7.51%)
Mutual labels:  cuda
Slang
Making it easier to work with shaders
Stars: ✭ 627 (-22.78%)
Mutual labels:  cuda
Pyopencl
OpenCL integration for Python, plus shiny features
Stars: ✭ 790 (-2.71%)
Mutual labels:  cuda
Vexcl
VexCL is a C++ vector expression template library for OpenCL/CUDA/OpenMP
Stars: ✭ 626 (-22.91%)
Mutual labels:  cuda
Kintinuous
Real-time large scale dense visual SLAM system
Stars: ✭ 740 (-8.87%)
Mutual labels:  cuda
Scikit Cuda
Python interface to GPU-powered libraries
Stars: ✭ 803 (-1.11%)
Mutual labels:  cuda
Arraymancer
A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends
Stars: ✭ 793 (-2.34%)
Mutual labels:  cuda
Ethereum nvidia miner
💰 USB flash drive ISO image for Ethereum, Zcash and Monero mining with NVIDIA graphics cards and Ubuntu GNU/Linux (headless)
Stars: ✭ 772 (-4.93%)
Mutual labels:  cuda

pytorch-loss

My implementation of label-smooth, amsoftmax, focal-loss, dual-focal-loss, triplet-loss, giou-loss, affinity-loss, pc_softmax_cross_entropy, ohem-loss (softmax-based online hard example mining loss), large-margin-softmax (BMVC 2019), lovasz-softmax-loss, and dice-loss (both generalized soft dice loss and batch soft dice loss). Maybe this will be useful in my future work.

I have also implemented the swish, hard-swish (hswish), and mish activation functions.
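
For reference, the formulas behind these activations are simple enough to sketch in plain PyTorch; the repo's module versions (SwishV1, HSwishV1, MishV1, etc.) are assumed to be drop-in nn.Module equivalents of these element-wise functions:

    import torch
    import torch.nn.functional as F

    def swish(x):
        # swish(x) = x * sigmoid(x)
        return x * torch.sigmoid(x)

    def hswish(x):
        # hard-swish(x) = x * relu6(x + 3) / 6
        return x * F.relu6(x + 3.) / 6.

    def mish(x):
        # mish(x) = x * tanh(softplus(x))
        return x * torch.tanh(F.softplus(x))

    x = torch.randn(4, 8)
    print(swish(x).shape, hswish(x).shape, mish(x).shape)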

Additionally, a CUDA-based one-hot function is included (with label-smoothing support).
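
As a rough illustration (not necessarily the repo's exact API), a label-smoothed one-hot encoding can be written in plain PyTorch as below; the CUDA op is presumably a faster equivalent of this:

    import torch

    def smoothed_one_hot(labels, n_classes, lb_smooth=0.1):
        # hypothetical reference implementation: the true class gets
        # 1 - lb_smooth + lb_smooth / n_classes, every other class
        # gets lb_smooth / n_classes
        off_value = lb_smooth / n_classes
        on_value = 1. - lb_smooth + off_value
        out = torch.full((labels.size(0), n_classes), off_value)
        out.scatter_(1, labels.unsqueeze(1), on_value)
        return out

    labels = torch.tensor([0, 2, 1])
    print(smoothed_one_hot(labels, n_classes=4))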

An Exponential Moving Average (EMA) operator has been newly added.
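
The idea is to keep a shadow copy of the model weights as a running average that can be swapped in for evaluation. A minimal concept sketch in plain PyTorch (the class and method names below are hypothetical, not this repo's EMA API) looks like:

    import torch
    import torch.nn as nn

    class SimpleEMA:
        """Hypothetical sketch: shadow = alpha * shadow + (1 - alpha) * param."""
        def __init__(self, model, alpha=0.999):
            self.alpha = alpha
            self.shadow = {k: v.detach().clone()
                           for k, v in model.state_dict().items()
                           if v.dtype.is_floating_point}

        @torch.no_grad()
        def update(self, model):
            for k, v in model.state_dict().items():
                if k in self.shadow:
                    self.shadow[k].mul_(self.alpha).add_(v, alpha=1. - self.alpha)

    model = nn.Linear(8, 2)
    ema = SimpleEMA(model, alpha=0.99)
    ema.update(model)  # call once after each optimizer step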

Convolution ops are also included, such as coord-conv2d and dynamic-conv2d (dy-conv2d).
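
For a sense of what coord-conv does (the repo's CoordConv2d is assumed to implement something along these lines), the trick is to concatenate normalized x/y coordinate maps as extra input channels before an ordinary convolution:

    import torch
    import torch.nn as nn

    class CoordConvSketch(nn.Module):
        """Hypothetical sketch of coord-conv: append coordinate channels, then convolve."""
        def __init__(self, in_chan, out_chan, kernel_size=3, padding=1):
            super().__init__()
            self.conv = nn.Conv2d(in_chan + 2, out_chan, kernel_size, padding=padding)

        def forward(self, x):
            n, _, h, w = x.shape
            ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(n, 1, h, w)
            xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(n, 1, h, w)
            return self.conv(torch.cat([x, ys, xs], dim=1))

    m = CoordConvSketch(3, 16)
    print(m(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 16, 32, 32])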

Some operators are implemented as a PyTorch CUDA extension, so you need to compile it first:

    $ python setup.py install

After installing, you can pick what you need and use the losses or ops like one of these:

    from pytorch_loss import SwishV1, SwishV2, SwishV3
    from pytorch_loss import HSwishV1, HSwishV2, HSwishV3
    from pytorch_loss import MishV1, MishV2, MishV3
    from pytorch_loss import convert_to_one_hot, convert_to_one_hot_cu, OnehotEncoder
    from pytorch_loss import EMA

    from pytorch_loss import TripletLoss
    from pytorch_loss import SoftDiceLossV1, SoftDiceLossV2, SoftDiceLossV3
    from pytorch_loss import PCSoftmaxCrossEntropyV1, PCSoftmaxCrossEntropyV2
    from pytorch_loss import LargeMarginSoftmaxV1, LargeMarginSoftmaxV2, LargeMarginSoftmaxV3
    from pytorch_loss import LabelSmoothSoftmaxCEV1, LabelSmoothSoftmaxCEV2, LabelSmoothSoftmaxCEV3
    from pytorch_loss import generalized_iou_loss
    from pytorch_loss import FocalLossV1, FocalLossV2, FocalLossV3
    from pytorch_loss import Dual_Focal_loss
    from pytorch_loss import GeneralizedSoftDiceLoss, BatchSoftDiceLoss
    from pytorch_loss import AMSoftmax
    from pytorch_loss import AffinityFieldLoss, AffinityLoss
    from pytorch_loss import OhemCELoss, OhemLargeMarginLoss
    from pytorch_loss import LovaszSoftmaxV1, LovaszSoftmaxV3
    from pytorch_loss import TaylorCrossEntropyLoss

    from pytorch_loss import TaylorSoftmax

    from pytorch_loss import CoordConv2d, DY_Conv2d
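
For example, a typical call might look like the sketch below; the constructor arguments shown (lb_smooth, ignore_index) are assumptions about the API, so check the source for the exact signatures:

    import torch
    from pytorch_loss import LabelSmoothSoftmaxCEV1

    # fake segmentation batch: 4 samples, 10 classes, 16x16 logits map
    logits = torch.randn(4, 10, 16, 16, requires_grad=True)
    labels = torch.randint(0, 10, (4, 16, 16))

    # assumed constructor arguments; see the repo for the exact signature
    criteria = LabelSmoothSoftmaxCEV1(lb_smooth=0.1, ignore_index=255)
    loss = criteria(logits, labels)
    loss.backward()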

Note that some losses or ops have three versions, e.g. LabelSmoothSoftmaxCEV1, LabelSmoothSoftmaxCEV2, and LabelSmoothSoftmaxCEV3. Here V1 is implemented with pure PyTorch ops and uses torch.autograd for the backward computation, V2 is implemented with pure PyTorch ops but uses a self-derived formula for the backward computation, and V3 is implemented as a CUDA extension. Generally speaking, the V3 ops are faster and more memory efficient, since I have tried to squeeze everything into one CUDA kernel function, which in most cases brings less overhead than a combination of PyTorch ops.
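
Since the versions are meant to be interchangeable, a quick sanity check like the sketch below (again assuming V1 and V2 accept the same constructor arguments) should produce closely matching losses:

    import torch
    from pytorch_loss import LabelSmoothSoftmaxCEV1, LabelSmoothSoftmaxCEV2

    logits = torch.randn(4, 10, 16, 16)
    labels = torch.randint(0, 10, (4, 16, 16))

    loss_v1 = LabelSmoothSoftmaxCEV1(lb_smooth=0.1)(logits, labels)
    loss_v2 = LabelSmoothSoftmaxCEV2(lb_smooth=0.1)(logits, labels)
    print(torch.allclose(loss_v1, loss_v2, atol=1e-5))  # expected: True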

If you happen to find this repo and spot errors in my code, feel free to open an issue to correct me.
