
henry8527 / Cot

[ICLR'19] Complement Objective Training

Programming Languages

Python
139,335 projects - #7 most used programming language

Projects that are alternatives to or similar to Cot

Notes
The notes for Math, Machine Learning, Deep Learning and Research papers.
Stars: ✭ 53 (-24.29%)
Mutual labels:  optimization
Prioritizr
Systematic conservation prioritization in R
Stars: ✭ 62 (-11.43%)
Mutual labels:  optimization
Powa Web
PoWA user interface
Stars: ✭ 66 (-5.71%)
Mutual labels:  optimization
Complementarity.jl
provides a modeling interface for mixed complementarity problems (MCP) and math programs with equilibrium problems (MPEC) via JuMP
Stars: ✭ 54 (-22.86%)
Mutual labels:  optimization
One Pixel Attack Keras
Keras implementation of "One pixel attack for fooling deep neural networks" using differential evolution on Cifar10 and ImageNet
Stars: ✭ 1,097 (+1467.14%)
Mutual labels:  cifar10
Robust Adv Malware Detection
Code repository for the paper "Adversarial Deep Learning for Robust Detection of Binary Encoded Malware"
Stars: ✭ 63 (-10%)
Mutual labels:  optimization
Gd Uap
Generalized Data-free Universal Adversarial Perturbations
Stars: ✭ 50 (-28.57%)
Mutual labels:  optimization
Awesome Robotics Libraries
😎 A curated list of robotics libraries and software
Stars: ✭ 1,159 (+1555.71%)
Mutual labels:  optimization
Ali Pytorch
PyTorch implementation of Adversarially Learned Inference (BiGAN).
Stars: ✭ 61 (-12.86%)
Mutual labels:  cifar10
Bayesiantools
General-Purpose MCMC and SMC Samplers and Tools for Bayesian Statistics
Stars: ✭ 66 (-5.71%)
Mutual labels:  optimization
Pc Optimization Hub
collection of various resources devoted to performance and input lag optimization
Stars: ✭ 55 (-21.43%)
Mutual labels:  optimization
Athena
Automatic equation building and curve fitting. Runs on Tensorflow. Built for academia and research.
Stars: ✭ 57 (-18.57%)
Mutual labels:  optimization
Tiny Site
Image optimization
Stars: ✭ 65 (-7.14%)
Mutual labels:  optimization
Dsp
An open-source parallel optimization solver for structured mixed-integer programming
Stars: ✭ 53 (-24.29%)
Mutual labels:  optimization
Spirit
Atomistic Spin Simulation Framework
Stars: ✭ 67 (-4.29%)
Mutual labels:  optimization
Sanic.js
JS Gotta go fast ! | Increase native JS functions performances
Stars: ✭ 50 (-28.57%)
Mutual labels:  optimization
Better Firebase Functions
This repo provides functionality for a better way of organising files, imports and function triggers in Firebase Cloud Functions
Stars: ✭ 63 (-10%)
Mutual labels:  optimization
Label Embedding Network
Label Embedding Network
Stars: ✭ 69 (-1.43%)
Mutual labels:  cifar10
Frost Dev
Fast Robot Optimization and Simulation Toolkit (FROST)
Stars: ✭ 67 (-4.29%)
Mutual labels:  optimization
Pyribs
A bare-bones Python library for quality diversity optimization.
Stars: ✭ 66 (-5.71%)
Mutual labels:  optimization

Complement Objective Training

Overview

This repository contains the PyTorch implementation of Complement Objective Training introduced in the following paper:

Complement Objective Training.
Hao-Yun Chen, Pei-Hsin Wang, Chun-Hao Liu, Shih-Chieh Chang, Jia-Yu Pan, Yu-Ting Chen, Wei Wei, Da-Cheng Juan.
https://openreview.net/forum?id=HyM7AiA5YX

Introduction

Complement Objective Training (COT) is a new training paradigm that updates neural network parameters by alternating iteratively between the primary objective and the complement objective. Conventional training with cross entropy as the primary objective maximizes the predicted probability of the ground-truth class; we propose "complement entropy" as the complement objective, which neutralizes (flattens) the predicted probabilities of the complement classes, i.e. all classes other than the ground truth. The experiments confirm that, compared to conventional training with only the primary objective, additionally training with the complement objective further improves the performance of state-of-the-art models across all tasks.
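
To make the alternating scheme concrete, the following is a minimal PyTorch sketch of one COT-style training step. It reflects a reading of the paper rather than the exact code in this repository; the function names complement_entropy and cot_step, as well as the 1/(N*K) normalization and the eps constants, are illustrative assumptions.

import torch
import torch.nn.functional as F

def complement_entropy(logits, targets, eps=1e-7):
    """Flatten the predicted probabilities of the non-ground-truth (complement) classes."""
    probs = F.softmax(logits, dim=1)                       # (N, K) predicted distribution
    gt = probs.gather(1, targets.unsqueeze(1))             # (N, 1) ground-truth probability
    comp = probs / (1.0 - gt + eps)                        # renormalize mass over complement classes
    mask = torch.ones_like(probs).scatter_(1, targets.unsqueeze(1), 0.0)  # drop ground-truth entry
    # Sum of p*log(p) over complement classes: minimizing it maximizes their entropy,
    # i.e. it pushes the wrong-class probabilities toward a flat distribution.
    n, k = probs.shape
    return (comp * torch.log(comp + eps) * mask).sum() / (n * k)

def cot_step(model, optimizer, inputs, targets):
    """One COT iteration: a step on the primary objective, then a step on the complement objective."""
    optimizer.zero_grad()
    F.cross_entropy(model(inputs), targets).backward()     # primary objective (cross entropy)
    optimizer.step()

    optimizer.zero_grad()
    complement_entropy(model(inputs), targets).backward()  # complement objective
    optimizer.step()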

Dependencies

  • Python 3.6
  • PyTorch 0.4.1
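
For reference, an environment matching these dependencies could be set up roughly as follows; the torchvision requirement and its version are assumptions (the repository lists only Python and PyTorch, and torchvision is commonly used to load CIFAR-10):

pip install torch==0.4.1 torchvision==0.2.1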

Usage

To produce the baseline results:

python main.py --sess Baseline_session

To train with the complement objective (COT):

python main.py --COT --sess COT_session

Benchmark on CIFAR-10

The following table shows the best test errors in a 200-epoch training session. (Please refer to Figure 3a in the paper for details.)

Model               Baseline   COT
PreAct ResNet-18    5.46%      4.86%

Citation

If you find this work useful in your research, please cite:

@inproceedings{chen2018complement,
  title={Complement Objective Training},
  author={Hao-Yun Chen and Pei-Hsin Wang and Chun-Hao Liu and Shih-Chieh Chang and Jia-Yu Pan and Yu-Ting Chen and Wei Wei and Da-Cheng Juan},
  booktitle={International Conference on Learning Representations},
  year={2019},
  url={https://openreview.net/forum?id=HyM7AiA5YX},
}

Acknowledgement

The CIFAR-10 reimplementation of COT is adapted from the pytorch-cifar repository by kuangliu.
