
mblondel / Fenchel Young Losses

Probabilistic classification in PyTorch/TensorFlow/scikit-learn with Fenchel-Young losses


Projects that are alternatives to, or similar to, Fenchel Young Losses

Ml code
A repository for recording machine learning code
Stars: ✭ 75 (-50.66%)
Mutual labels:  sklearn
Python Flask Sklearn Docker Template
A simple example of python api for real time machine learning, using scikit-learn, Flask and Docker
Stars: ✭ 117 (-23.03%)
Mutual labels:  sklearn
Machine Learning Projects
This repository consists of all my Machine Learning Projects.
Stars: ✭ 135 (-11.18%)
Mutual labels:  sklearn
Tutorials
Machine learning tutorials
Stars: ✭ 9,616 (+6226.32%)
Mutual labels:  sklearn
Ai Chatbot Framework
A python chatbot framework with Natural Language Understanding and Artificial Intelligence.
Stars: ✭ 1,564 (+928.95%)
Mutual labels:  sklearn
Openuba
A robust, and flexible open source User & Entity Behavior Analytics (UEBA) framework used for Security Analytics. Developed with luv by Data Scientists & Security Analysts from the Cyber Security Industry. [PRE-ALPHA]
Stars: ✭ 127 (-16.45%)
Mutual labels:  sklearn
Mlatimperial2017
Materials for the course of machine learning at Imperial College organized by Yandex SDA
Stars: ✭ 71 (-53.29%)
Mutual labels:  sklearn
Data Analysis
Mainly a summary of web-scraping and data-analysis projects, plus modeling, machine learning, and model evaluation.
Stars: ✭ 142 (-6.58%)
Mutual labels:  sklearn
House Price Prediction
A complete house-price prediction project: 1. scrape data from Lianjia.com; 2. after preprocessing, build models to predict prices using several logistic-regression models from sklearn and a Keras neural network. The neural network performed better, with an R^2 of about 0.75.
Stars: ✭ 116 (-23.68%)
Mutual labels:  sklearn
Role2vec
A scalable Gensim implementation of "Learning Role-based Graph Embeddings" (IJCAI 2018).
Stars: ✭ 134 (-11.84%)
Mutual labels:  sklearn
Skpro
Supervised domain-agnostic prediction framework for probabilistic modelling
Stars: ✭ 107 (-29.61%)
Mutual labels:  sklearn
Facial Expression Recognition Svm
Training an SVM classifier to recognize facial expressions (emotions) on the Fer2013 dataset
Stars: ✭ 110 (-27.63%)
Mutual labels:  sklearn
Ds Ai Tech Notes
📖 [Translation] Technical notes on data science and artificial intelligence
Stars: ✭ 131 (-13.82%)
Mutual labels:  sklearn
Machinelearningalgorithm
Implementations of some commonly used machine learning algorithms
Stars: ✭ 84 (-44.74%)
Mutual labels:  sklearn
Qlik Py Tools
Data Science algorithms for Qlik implemented as a Python Server Side Extension (SSE).
Stars: ✭ 135 (-11.18%)
Mutual labels:  sklearn
Karateclub
Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs (CIKM 2020)
Stars: ✭ 1,190 (+682.89%)
Mutual labels:  sklearn
Aws Machine Learning University Accelerated Nlp
Machine Learning University: Accelerated Natural Language Processing Class
Stars: ✭ 1,695 (+1015.13%)
Mutual labels:  sklearn
Mlmodels
mlmodels : Machine Learning and Deep Learning Model ZOO for Pytorch, Tensorflow, Keras, Gluon models...
Stars: ✭ 145 (-4.61%)
Mutual labels:  sklearn
Ml Cheatsheet
A constantly updated python machine learning cheatsheet
Stars: ✭ 136 (-10.53%)
Mutual labels:  sklearn
Automl alex
State-of-the art Automated Machine Learning python library for Tabular Data
Stars: ✭ 132 (-13.16%)
Mutual labels:  sklearn

.. -*- mode: rst -*-

Fenchel-Young losses

This package implements loss functions useful for probabilistic classification. More specifically, it provides

  • drop-in replacements for PyTorch loss functions
  • drop-in replacements for TensorFlow loss functions
  • scikit-learn compatible classifiers

The package is based on the Fenchel-Young loss framework [1,2,3].

.. image:: examples/tsallis.png
   :alt: Tsallis losses
   :align: center

Notice from the center plot that the sparsemax and Tsallis losses can produce exactly zero (sparse) probabilities, unlike the logistic (softmax) loss.

Supported Fenchel-Young losses

  • Multinomial logistic loss
  • One-vs-all logistic loss
  • Sparsemax loss (sparse probabilities!)
  • Tsallis losses (sparse probabilities!)

Sparse means that some classes have exactly zero probability, i.e., these classes are irrelevant.
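For intuition on where the zeros come from: sparsemax computes the Euclidean projection of the score vector onto the probability simplex, and this projection can land on the simplex boundary, assigning exactly zero probability to low-scoring classes. Below is a minimal NumPy sketch of that projection using the well-known sorting-based algorithm; it is illustrative only, and the package's own implementation may differ.

.. code-block:: python

    import numpy as np

    def sparsemax(theta):
        # Euclidean projection of a 1d score vector onto the probability simplex.
        z = np.sort(theta)[::-1]                 # scores in decreasing order
        cumsum = np.cumsum(z)                    # running sums of sorted scores
        k = np.arange(1, len(z) + 1)
        support = 1 + k * z > cumsum             # coordinates kept in the support
        k_z = k[support][-1]                     # size of the support
        tau = (cumsum[support][-1] - 1) / k_z    # threshold to subtract
        return np.maximum(theta - tau, 0.0)

    # the lowest-scoring class receives exactly zero probability
    print(sparsemax(np.array([-2.5, 1.2, 0.5])))  # [0.   0.85 0.15]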

Tsallis losses are a family of losses parametrized by a positive real value α. They recover the multinomial logistic loss with α=1 and the sparsemax loss with α=2. Values of α between 1 and 2 interpolate between the two losses.
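For reference, the construction behind all of these losses [2,3] can be stated compactly; the notation below may differ slightly from the papers. A Fenchel-Young loss generated by a regularizer Ω over the probability simplex △ is

.. math::

    L_\Omega(\theta; y) = \Omega^*(\theta) + \Omega(y) - \langle \theta, y \rangle,
    \qquad
    \Omega^*(\theta) = \sup_{p \in \triangle} \, \langle \theta, p \rangle - \Omega(p).

Up to additive constants, choosing Ω(p) as the negative Shannon entropy Σ_j p_j log p_j recovers the multinomial logistic loss, Ω(p) = (‖p‖² − 1)/2 recovers the sparsemax loss, and the Tsallis negentropies Ω_α(p) = (Σ_j p_j^α − 1)/(α(α−1)) interpolate between the two as α moves from 1 to 2.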

In all losses above, the ground truth can be either an n_samples 1d-array of label integers (each label between 0 and n_classes-1) or an n_samples x n_classes 2d-array of label proportions (each row summing to 1).

Examples

scikit-learn compatible classifier:

.. code-block:: python

    import numpy as np
    from sklearn.datasets import make_classification
    from fyl_sklearn import FYClassifier

    X, y = make_classification(n_samples=10, n_features=5, n_informative=3,
                               n_classes=3, random_state=0)
    clf = FYClassifier(loss="sparsemax")
    clf.fit(X, y)
    print(clf.predict_proba(X[:3]))
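Because FYClassifier is advertised as scikit-learn compatible, it should also compose with the usual model-selection utilities. A hedged sketch using cross_val_score, assuming FYClassifier fully implements the estimator API (cloning, scoring, etc.):

.. code-block:: python

    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from fyl_sklearn import FYClassifier

    X, y = make_classification(n_samples=100, n_features=5, n_informative=3,
                               n_classes=3, random_state=0)
    # three-fold cross-validated accuracy of the sparsemax classifier
    print(cross_val_score(FYClassifier(loss="sparsemax"), X, y, cv=3))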

Drop-in replacement for PyTorch losses:

.. code-block:: python

    import torch
    from fyl_pytorch import SparsemaxLoss

    # integers between 0 and n_classes-1, shape = n_samples
    y_true = torch.tensor([0, 2])

    # model scores, shape = n_samples x n_classes
    theta = torch.tensor([[-2.5, 1.2, 0.5], [2.2, 0.8, -1.5]])

    loss = SparsemaxLoss()

    # loss value (caution: the argument order is reversed compared to the
    # numpy and tensorflow implementations)
    print(loss(theta, y_true))

    # predictions (probabilities) are stored for convenience
    print(loss.y_pred)

    # they can also be recomputed from theta
    print(loss.predict(theta))

    # label proportions are also allowed
    y_true = torch.tensor([[0.8, 0.2, 0], [0.1, 0.2, 0.7]])
    print(loss(theta, y_true))
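Since SparsemaxLoss uses the same (scores, targets) call signature as torch.nn.CrossEntropyLoss, it can plausibly be dropped into an ordinary training loop. A sketch with a toy linear model; the model, data, and hyperparameters are placeholders, and it assumes the returned loss is a scalar tensor suitable for backward():

.. code-block:: python

    import torch
    from fyl_pytorch import SparsemaxLoss

    model = torch.nn.Linear(5, 3)              # toy model: 5 features, 3 classes
    loss_fn = SparsemaxLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    X = torch.randn(10, 5)                     # placeholder inputs
    y = torch.randint(0, 3, (10,))             # integer labels in [0, n_classes)

    for _ in range(10):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)            # same (theta, y_true) convention
        loss.backward()
        optimizer.step()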

Drop-in replacement for TensorFlow losses:

.. code-block:: python

    import tensorflow as tf
    from fyl_tensorflow import sparsemax_loss, sparsemax_predict

    # integers between 0 and n_classes-1, shape = n_samples
    y_true = tf.constant([0, 2])

    # model scores, shape = n_samples x n_classes
    theta = tf.constant([[-2.5, 1.2, 0.5], [2.2, 0.8, -1.5]])

    # loss value
    print(sparsemax_loss(y_true, theta))

    # predictions (probabilities)
    print(sparsemax_predict(theta))

    # label proportions are also allowed
    y_true = tf.constant([[0.8, 0.2, 0], [0.1, 0.2, 0.7]])
    print(sparsemax_loss(y_true, theta))
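Since sparsemax_loss takes (y_true, theta) in that order, it matches the Keras (y_true, y_pred) loss convention, so it may work directly with model.compile. The sketch below is an assumption, not documented usage: the toy model and data are placeholders, and compatibility with Keras's reduction machinery is not guaranteed.

.. code-block:: python

    import tensorflow as tf
    from fyl_tensorflow import sparsemax_loss

    model = tf.keras.Sequential([tf.keras.layers.Dense(3)])  # outputs raw scores
    model.compile(optimizer="sgd", loss=sparsemax_loss)

    X = tf.random.normal((10, 5))
    # label proportions (here one-hot), matching the 2d format shown above
    y = tf.one_hot(tf.random.uniform((10,), maxval=3, dtype=tf.int32), depth=3)
    model.fit(X, y, epochs=2, verbose=0)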

Installation

Simply copy the relevant files (the fyl_sklearn, fyl_pytorch, or fyl_tensorflow modules used in the examples above) into your project.

References

.. [1] SparseMAP: Differentiable Sparse Structured Inference. Vlad Niculae, André F. T. Martins, Mathieu Blondel, Claire Cardie. In Proc. of ICML 2018. [`arXiv <https://arxiv.org/abs/1802.04223>`_]

.. [2] Learning Classifiers with Fenchel-Young Losses: Generalized Entropies, Margins, and Algorithms. Mathieu Blondel, André F. T. Martins, Vlad Niculae. In Proc. of AISTATS 2019. [`arXiv <https://arxiv.org/abs/1805.09717>`_]

.. [3] Learning with Fenchel-Young Losses. Mathieu Blondel, André F. T. Martins, Vlad Niculae. Preprint. [`arXiv <https://arxiv.org/abs/1901.02324>`_]

Author

  • Mathieu Blondel, 2018