bostdiek / PublicWeaklySupervised (Licence: other)

(Machine) Learning to Do More with Less

https://arxiv.org/abs/1706.09451

  • Timothy Cohen
  • Marat Freytsis
  • Bryan Ostdiek

Abstract

Determining the best method for training a machine learning algorithm is critical to maximizing its ability to classify data. In this paper, we compare the standard "fully supervised" approach (that relies on knowledge of event-by-event truth-level labels) with a recent proposal that instead utilizes class ratios as the only discriminating information provided during training. This so-called "weakly supervised" technique has access to less information than the fully supervised method and yet is still able to yield impressive discriminating power. In addition, weak supervision seems particularly well suited to particle physics since quantum mechanics is incompatible with the notion of mapping an individual event onto any single Feynman diagram. We examine the technique in detail -- both analytically and numerically -- with a focus on the robustness to issues of mischaracterizing the training samples. Weakly supervised networks turn out to be remarkably insensitive to systematic mismodeling. Furthermore, we demonstrate that the event level outputs for weakly versus fully supervised networks are probing different kinematics, even though the numerical quality metrics are essentially identical. This implies that it should be possible to improve the overall classification ability by combining the output from the two types of networks. For concreteness, we apply this technology to a signature of beyond the Standard Model physics to demonstrate that all these impressive features continue to hold in a scenario of relevance to the LHC.
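The central idea above — training on class ratios rather than per-event labels — can be sketched with a toy cost function that compares the batch-averaged network output to the known signal fraction of that batch. This is an illustrative stand-in written in plain NumPy, not the exact cost function defined in the repository's 'Functions' directory:

```python
import numpy as np

def weak_loss(y_pred, batch_fraction):
    """Toy weakly supervised cost: squared difference between the
    batch-averaged classifier output and the known signal fraction
    of the batch.  No event-by-event labels are used.

    Illustrative only; the repository defines its own weak cost
    function, which may differ in form.
    """
    return (np.mean(y_pred) - batch_fraction) ** 2

# A batch whose outputs average to the true 30% signal fraction
# incurs zero loss, regardless of which individual events fired.
preds = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
print(weak_loss(preds, 0.3))  # 0.0 for this batch
```

Note that the loss is minimized by matching the aggregate fraction, which is why no truth-level label for any single event is ever needed during training.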

Notes

  • WeakSupervisionDemo.ipynb is a Jupyter notebook containing a quick demo of how to train weakly supervised networks with Keras. It also shows the toy-model distributions used in our paper.

  • The 'Programs' directory holds a Python script that generates the data for the paper's section on mislabeled data. It also contains a Jupyter notebook covering the entire BSM section of the paper.

  • 'Functions' contains a sample generator for our bi-modal distributions; an example of its use can be found in the WeakSupervisionDemo.ipynb notebook. The explicit definition of the weak cost function also lives in this directory.

  • Lastly, the 'Data' folder contains the BSM Monte Carlo data, the saved Keras model weights, and the results from the mislabeled data sets.
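A toy generator along the lines of the one in 'Functions' can be sketched as follows. All means, widths, and mixture weights here are made-up illustrative values, not the distributions actually used in the paper:

```python
import numpy as np

def sample_bimodal(n, frac_signal, rng=None):
    """Draw n events from a two-class toy model in which each class
    is a bi-modal (two-component) Gaussian mixture in one feature.

    Returns (x, y): feature values and truth labels (1 = signal).
    Component means/widths are hypothetical placeholders.
    """
    rng = rng or np.random.default_rng(0)
    n_sig = int(round(n * frac_signal))
    n_bkg = n - n_sig
    # Signal: equal-weight modes at -2 and +2.
    sig = np.where(rng.random(n_sig) < 0.5,
                   rng.normal(-2.0, 0.5, n_sig),
                   rng.normal(+2.0, 0.5, n_sig))
    # Background: equal-weight modes at -1 and +1.
    bkg = np.where(rng.random(n_bkg) < 0.5,
                   rng.normal(-1.0, 0.5, n_bkg),
                   rng.normal(+1.0, 0.5, n_bkg))
    x = np.concatenate([sig, bkg])
    y = np.concatenate([np.ones(n_sig), np.zeros(n_bkg)])
    return x, y

# Generate a batch with a known 30% signal fraction -- the only
# label-level information a weakly supervised network would see.
x, y = sample_bimodal(1000, 0.3)
```

For weak supervision, only the aggregate fraction (here 0.3) would be passed to the training loop; the per-event labels `y` exist only for evaluation.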

Please cite as

@article{Cohen:2017exh,
      author         = "Cohen, Timothy and Freytsis, Marat and Ostdiek, Bryan",
      title          = "{(Machine) Learning to Do More with Less}",
      journal        = "JHEP",
      volume         = "02",
      year           = "2018",
      pages          = "034",
      doi            = "10.1007/JHEP02(2018)034",
      eprint         = "1706.09451",
      archivePrefix  = "arXiv",
      primaryClass   = "hep-ph",
      SLACcitation   = "%%CITATION = ARXIV:1706.09451;%%"
}