
lancopku / well-classified-examples-are-underestimated

License: Apache-2.0
Code for the AAAI 2022 publication "Well-classified Examples are Underestimated in Classification with Deep Neural Networks"

Programming Languages

Jupyter Notebook, Python, Shell, CUDA, C++, Cython

Projects that are alternatives of or similar to well-classified-examples-are-underestimated

trojanzoo
TrojanZoo provides a universal PyTorch platform for conducting security research (especially on backdoor attacks/defenses) in image classification with deep learning.
Stars: ✭ 178 (+747.62%)
Mutual labels:  image-classification, adversarial-attacks
COVID-19-Tweet-Classification-using-Roberta-and-Bert-Simple-Transformers
Rank 1 / 216
Stars: ✭ 24 (+14.29%)
Mutual labels:  transformer, classification
Conformer
Official code for Conformer: Local Features Coupling Global Representations for Visual Recognition
Stars: ✭ 345 (+1542.86%)
Mutual labels:  transformer, classification
SimP-GCN
Implementation of the WSDM 2021 paper "Node Similarity Preserving Graph Convolutional Networks"
Stars: ✭ 43 (+104.76%)
Mutual labels:  adversarial-attacks, graph-neural-networks
graphtrans
Representing Long-Range Context for Graph Neural Networks with Global Attention
Stars: ✭ 45 (+114.29%)
Mutual labels:  transformer, graph-neural-networks
Parametric-Contrastive-Learning
Parametric Contrastive Learning (ICCV2021)
Stars: ✭ 155 (+638.1%)
Mutual labels:  image-classification, imbalanced-learning
verseagility
Ramp up your custom natural language processing (NLP) task, allowing you to bring your own data, use your preferred frameworks and bring models into production.
Stars: ✭ 23 (+9.52%)
Mutual labels:  transformer, classification
grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of Graph Machine Learning.
Stars: ✭ 70 (+233.33%)
Mutual labels:  adversarial-attacks, graph-neural-networks
classification
Catalyst.Classification
Stars: ✭ 35 (+66.67%)
Mutual labels:  classification, image-classification
image-classification
A collection of SOTA Image Classification Models in PyTorch
Stars: ✭ 70 (+233.33%)
Mutual labels:  transformer, image-classification
Transformer-in-Transformer
An Implementation of Transformer in Transformer in TensorFlow for image classification, attention inside local patches
Stars: ✭ 40 (+90.48%)
Mutual labels:  transformer, image-classification
HRFormer
This is an official implementation of our NeurIPS 2021 paper "HRFormer: High-Resolution Transformer for Dense Prediction".
Stars: ✭ 357 (+1600%)
Mutual labels:  transformer, classification
Pro-GNN
Implementation of the KDD 2020 paper "Graph Structure Learning for Robust Graph Neural Networks"
Stars: ✭ 202 (+861.9%)
Mutual labels:  adversarial-attacks, graph-neural-networks
KitanaQA
KitanaQA: Adversarial training and data augmentation for neural question-answering models
Stars: ✭ 58 (+176.19%)
Mutual labels:  transformer, adversarial-attacks
kaggle-champs
Code for the CHAMPS Predicting Molecular Properties Kaggle competition
Stars: ✭ 49 (+133.33%)
Mutual labels:  transformer, graph-neural-networks
TNCR Dataset
Deep learning, Convolutional neural networks, Image processing, Document processing, Table detection, Page object detection, Table classification. https://www.sciencedirect.com/science/article/pii/S0925231221018142
Stars: ✭ 37 (+76.19%)
Mutual labels:  classification, image-classification
Text Classification Models Pytorch
Implementation of State-of-the-art Text Classification Models in Pytorch
Stars: ✭ 379 (+1704.76%)
Mutual labels:  transformer, classification
Nlp research
NLP research: a TensorFlow-based NLP deep-learning project supporting four major tasks: text classification, sentence matching, sequence labeling, and text generation.
Stars: ✭ 141 (+571.43%)
Mutual labels:  transformer, classification
ML4K-AI-Extension
Use machine learning in AppInventor, with easy training using text, images, or numbers through the Machine Learning for Kids website.
Stars: ✭ 18 (-14.29%)
Mutual labels:  classification, image-classification
FNet-pytorch
Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms
Stars: ✭ 204 (+871.43%)
Mutual labels:  transformer, image-classification

This is the repository for the paper "Well-classified Examples are Underestimated in Classification with Deep Neural Networks" (AAAI 2022).

In this paper, we find that the cross-entropy loss hinders representation learning, energy optimization, and margin growth, and that well-classified examples play a vital role in addressing these issues. We support this finding with both theoretical analysis and empirical results.
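
As a quick numerical illustration of the gradient argument (our sketch, not code from the paper): cross-entropy's gradient with respect to the true-class probability p flattens out as p approaches 1, so well-classified examples barely contribute to training, while the encouraging loss defined below keeps their gradient magnitude large.

import torch

# Sketch: compare d(loss)/dp for cross-entropy (-log p) and the encouraging
# loss (-log p + log(1 - p), i.e. the negative log-odds of the target class).
for p0 in (0.5, 0.9, 0.99):
    p = torch.tensor(p0, requires_grad=True)
    (g_ce,) = torch.autograd.grad(-torch.log(p), p)
    p = torch.tensor(p0, requires_grad=True)
    (g_el,) = torch.autograd.grad(-torch.log(p) + torch.log(1 - p), p)
    print(f"p={p0}: dCE/dp={g_ce.item():.2f}, dEL/dp={g_el.item():.2f}")
# dCE/dp approaches -1 as p -> 1, while dEL/dp keeps growing in magnitude,
# so confident (well-classified) examples keep driving the optimization.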

You can find the implementation and scripts (readme.sh) for each task in its corresponding directory.

Our modifications are mainly in el.py within each task's directory.

We give the code for a counterexample (the encouraging loss) below.

Example implementation

import math

import torch
import torch.nn as nn
from torch.nn import functional as F

class EncouragingLoss(nn.Module):
    def __init__(self, log_end=0.75, reduction='mean'):
        super().__init__()
        # log_end marks where the log bonus switches to its linear continuation
        # (the conservative bonus). log_end=1 gives the plain log bonus, but
        # 0.75 can easily work in existing optimization systems and 0.5 worked
        # in all settings we tested; we recommend LE=0.75 for high-accuracy
        # scenarios and a lower LE for low-accuracy scenarios.
        self.log_end = log_end
        self.reduction = reduction

    def forward(self, input, target):
        lprobs = F.log_softmax(input, dim=-1)  # log p
        probs = torch.exp(lprobs)              # p
        # Bonus: log(1 - p), clamped to avoid log(0).
        bonus = torch.log(torch.clamp(1.0 - probs, min=1e-5))
        if self.log_end != 1.0:
            # Beyond p = log_end, replace log(1 - p) by its tangent line at
            # log_end so the bonus stays bounded as p -> 1 (conservative bonus).
            log_end = self.log_end
            y_log_end = math.log(1.0 - log_end)
            bonus_after_log_end = (probs - log_end) / (log_end - 1.0) + y_log_end
            bonus = torch.where(probs > log_end, bonus_after_log_end, bonus)
        # NLL of (log p - log(1 - p)): the model maximizes the log-odds of the
        # target class instead of only its log-probability.
        loss = F.nll_loss(lprobs - bonus, target.view(-1), reduction=self.reduction)
        return loss
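
A minimal usage sketch: EncouragingLoss is a drop-in replacement for nn.CrossEntropyLoss, consuming raw logits and integer class targets.

# Assumes the imports and EncouragingLoss definition above.
logits = torch.randn(8, 10, requires_grad=True)  # batch of 8, 10 classes
targets = torch.randint(0, 10, (8,))
criterion = EncouragingLoss(log_end=0.75)
loss = criterion(logits, targets)
loss.backward()
print(loss.item())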

For the label-smoothed version, see label_smoothed_encouraging_loss_fairseq.py.
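
As a rough, hypothetical sketch of how label smoothing combines with the bonus (the actual fairseq implementation in that file differs in details), standard label smoothing can be applied on top of the bonus-adjusted log-probabilities.

def label_smoothed_encouraging_loss(lprobs, bonus, target, epsilon=0.1):
    # Hypothetical helper, not the repository's exact code: lprobs and bonus
    # are computed as in EncouragingLoss.forward above.
    adjusted = lprobs - bonus
    nll = -adjusted.gather(dim=-1, index=target.unsqueeze(-1)).squeeze(-1)
    smooth = -adjusted.mean(dim=-1)  # uniform smoothing over all classes
    return ((1.0 - epsilon) * nll + epsilon * smooth).mean()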
