
Kyubyong / Label_smoothing

Licence: apache-2.0
Corrupted labels and label smoothing

Projects that are alternatives of or similar to Label smoothing

Midi Dataset
Code for creating a dataset of MIDI ground truth
Stars: ✭ 118 (-0.84%)
Mutual labels:  jupyter-notebook
Abstractive Text Summarization
PyTorch implementation/experiments on the paper "Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond".
Stars: ✭ 119 (+0%)
Mutual labels:  jupyter-notebook
Trimap generator
Generating automatic trimap through pixel dilation and strongly-connected-component algorithms
Stars: ✭ 119 (+0%)
Mutual labels:  jupyter-notebook
Deeplearning With Tensorflow Notes
Study notes and code for Long Quliang's "TensorFlow Deep Learning", using TensorFlow 2.0.0
Stars: ✭ 119 (+0%)
Mutual labels:  jupyter-notebook
Pydatadc 2018 Tidy
PyData 2018 tutorial for tidying data
Stars: ✭ 119 (+0%)
Mutual labels:  jupyter-notebook
Bayes By Backprop
PyTorch implementation of "Weight Uncertainty in Neural Networks"
Stars: ✭ 119 (+0%)
Mutual labels:  jupyter-notebook
Tensorflow shiny
An R/Shiny app for interactive RNN TensorFlow models
Stars: ✭ 118 (-0.84%)
Mutual labels:  jupyter-notebook
Defaultcreds Cheat Sheet
One place for all the default credentials, to assist Blue/Red teamers in finding devices with default passwords 🛡️
Stars: ✭ 1,949 (+1537.82%)
Mutual labels:  jupyter-notebook
Automunge
Artificial Learning, Intelligent Machines
Stars: ✭ 119 (+0%)
Mutual labels:  jupyter-notebook
Voice activity detector
A statistical model-based voice activity detector
Stars: ✭ 119 (+0%)
Mutual labels:  jupyter-notebook
Nestedtensor
[Prototype] Tools for the concurrent manipulation of variably sized Tensors.
Stars: ✭ 119 (+0%)
Mutual labels:  jupyter-notebook
Texture Synthesis Nonparametric Sampling
Implementation of "Texture Synthesis with Non-Parametric Sampling" paper by Alexei A. Efros and Thomas K. Leung
Stars: ✭ 119 (+0%)
Mutual labels:  jupyter-notebook
Adversarial examples
Adversarial examples
Stars: ✭ 118 (-0.84%)
Mutual labels:  jupyter-notebook
Ds salary proj
Repo for the data science salary prediction from the Data Science Project From Scratch video on my YouTube channel
Stars: ✭ 116 (-2.52%)
Mutual labels:  jupyter-notebook
2018 19 Classes
https://cc-mnnit.github.io/2018-19-Classes/ - 🎒 💻 Material for Computer Club Classes
Stars: ✭ 119 (+0%)
Mutual labels:  jupyter-notebook
Senato.py
A scraper for the data made available by the Italian Senate, and a cluster analysis to detect similar amendments.
Stars: ✭ 118 (-0.84%)
Mutual labels:  jupyter-notebook
Kaggle challenge
This is the code for "Kaggle Challenge LIVE" by Siraj Raval on YouTube
Stars: ✭ 119 (+0%)
Mutual labels:  jupyter-notebook
Topic Model Tutorial
Tutorial on topic models in Python with scikit-learn
Stars: ✭ 119 (+0%)
Mutual labels:  jupyter-notebook
Mnet deepcdr
Code for TMI 2018 "Joint Optic Disc and Cup Segmentation Based on Multi-label Deep Network and Polar Transformation"
Stars: ✭ 119 (+0%)
Mutual labels:  jupyter-notebook
Linear Attention Recurrent Neural Network
A recurrent attention module consisting of an LSTM cell which can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer Network. The LARNN cell with attention can be easily used inside a loop on the cell state, just like any other RNN. (LARNN)
Stars: ✭ 119 (+0%)
Mutual labels:  jupyter-notebook

Noisy Labels and Label Smoothing

When we apply the cross-entropy loss to a classification task, we expect the true label to be 1 and all the others to be 0. In other words, we have no doubt that the true labels are true and the others are not. Is that always the case? Maybe not. Many manual annotations are the result of multiple participants. They might have different criteria. They might make some mistakes. They are human, after all. As a result, the ground-truth labels we had perfect belief in are possibly wrong.
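
To make that concrete, here is a minimal numpy sketch (illustrative only, not code from this repo) of cross-entropy with a hard one-hot target: only the class we are fully confident in contributes to the loss.

```python
import numpy as np

probs = np.array([0.7, 0.2, 0.1])    # model's predicted class probabilities
target = np.array([1.0, 0.0, 0.0])   # hard one-hot label: full confidence in class 0

# Cross-entropy between the hard target and the prediction.
loss = -np.sum(target * np.log(probs))
print(loss)  # -log(0.7) ≈ 0.357; the zero entries of the target contribute nothing
```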

One possible solution to this is to relax our confidence in the labels. For instance, we can slightly lower the loss target value from 1 to, say, 0.9, and naturally raise the target values of the other classes above 0 by the same small amount. This idea is called label smoothing. Consult this for more information.
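
A common way to implement this, shown below as a rough sketch (the function name and epsilon value are illustrative, not necessarily what this repo uses), is to mix the one-hot targets with a uniform distribution over the classes.

```python
import numpy as np

def smooth_labels(one_hot, epsilon=0.1):
    """Soft targets: 1 -> (1 - epsilon) + epsilon/K and 0 -> epsilon/K for K classes."""
    num_classes = one_hot.shape[-1]
    return one_hot * (1.0 - epsilon) + epsilon / num_classes

target = np.array([1.0, 0.0, 0.0])
print(smooth_labels(target))  # ≈ [0.933, 0.033, 0.033] for K=3 and epsilon=0.1
```

Plugged into the same cross-entropy as above, these soft targets discourage the model from becoming over-confident in any single class.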

In this short project, I examine the effects of label smoothing when there is some noise in the labels. Concretely, I'd like to see whether label smoothing is effective in a binary classification/labeling task where both labels are noisy or only one label is noisy.
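
As an illustration of what "noisy" means here, the sketch below (the flip rates and helper name are my own assumptions, not the repo's actual setup) randomly flips a fraction of the 1s and/or the 0s, which covers both the one-sided and the two-sided noise settings.

```python
import numpy as np

def corrupt_labels(labels, flip_rate_pos=0.1, flip_rate_neg=0.0, seed=0):
    """Flip a fraction of the 1s and/or the 0s in a binary label array."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip_pos = (labels == 1) & (rng.random(labels.shape) < flip_rate_pos)
    flip_neg = (labels == 0) & (rng.random(labels.shape) < flip_rate_neg)
    noisy[flip_pos] = 0
    noisy[flip_neg] = 1
    return noisy

labels = np.array([1, 1, 0, 0, 1, 0, 1, 0])
# Each 1 is flipped to 0 with probability 0.5; the 0s are left untouched.
print(corrupt_labels(labels, flip_rate_pos=0.5))
```

Training once on the hard noisy labels and once on their smoothed versions then lets us compare the two settings.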
