ajcr / em-explanation

Licence: MIT license
Notebooks explaining the intuition behind the Expectation Maximisation algorithm

Language: Jupyter Notebook

Expectation Maximisation

Expectation Maximisation gives us a way to estimate model parameters when we are faced with hidden variables. For example, we can estimate distribution parameters for groups of points when we don't know which point belongs to which group:

[Animation: EM iteratively estimating the parameters of two groups of points]

Notebooks

These are notebooks explaining the intuition behind the Expectation Maximisation algorithm:

  • em-notebook-1 - given sequences of coin flips from two coins, but not knowing which coin generated which sequence, compute each coin's bias towards heads.
  • em-notebook-2 - given two equal-size groups of data points drawn from two normal distributions, but not knowing which point came from which group, compute the mean and standard deviation of the distributions.
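The coin-flip setting of em-notebook-1 can be sketched in a few lines of plain Python. This is a hypothetical illustration only: the head counts, starting guesses and iteration count below are made up, not taken from the notebook.

```python
import math

# Hypothetical data: heads counted in five sequences of 10 flips each
# (made-up numbers, not the notebook's actual data).
flips = [5, 9, 8, 4, 7]
n = 10

theta_a, theta_b = 0.6, 0.5  # initial guesses for each coin's heads bias

for _ in range(50):
    heads_a = total_a = heads_b = total_b = 0.0
    for h in flips:
        # E-step: how likely is this sequence under each coin's current bias?
        like_a = math.comb(n, h) * theta_a**h * (1 - theta_a)**(n - h)
        like_b = math.comb(n, h) * theta_b**h * (1 - theta_b)**(n - h)
        resp_a = like_a / (like_a + like_b)  # responsibility of coin A
        heads_a += resp_a * h
        total_a += resp_a * n
        heads_b += (1 - resp_a) * h
        total_b += (1 - resp_a) * n
    # M-step: re-estimate each bias from the expected head counts
    theta_a = heads_a / total_a
    theta_b = heads_b / total_b

print(theta_a, theta_b)  # coin A converges towards the high-heads sequences
```

Because the sequences are never hard-assigned to a coin, each one contributes fractionally to both bias estimates; the fractions sharpen as the estimates separate.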

Expectation Maximisation (EM) is often used to estimate model parameters when some variables or data points are not observable. It is an iterative method that repeatedly updates the parameter estimates, which converge to a local (not necessarily global) maximum of the likelihood. It is used extensively in finance, bioinformatics and other disciplines, where the central ideas behind the algorithm appear under different guises.
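Those iterative updates can be made concrete with a minimal sketch of the two-normals setting from em-notebook-2. The synthetic data, initial guesses and iteration count here are assumptions for illustration, not the notebook's code:

```python
import math
import random

# Synthetic data (an assumption for illustration): two equal-size groups
# drawn from N(0, 1) and N(5, 1.5), pooled with the group labels discarded.
random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(5.0, 1.5) for _ in range(200)])

mu = [-1.0, 6.0]    # initial guesses for the means
sigma = [1.0, 1.0]  # initial guesses for the standard deviations

def normal_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

for _ in range(100):
    # E-step: responsibility of component 0 for each point
    # (equal mixing weights, matching the equal-size groups).
    r0 = []
    for x in data:
        p0 = normal_pdf(x, mu[0], sigma[0])
        p1 = normal_pdf(x, mu[1], sigma[1])
        r0.append(p0 / (p0 + p1))
    # M-step: weighted mean and standard deviation for each component
    for k, weights in enumerate((r0, [1.0 - r for r in r0])):
        total = sum(weights)
        mu[k] = sum(w * x for w, x in zip(weights, data)) / total
        sigma[k] = math.sqrt(
            sum(w * (x - mu[k]) ** 2 for w, x in zip(weights, data)) / total
        )

print(mu, sigma)  # estimates should land near (0, 1) and (5, 1.5)
```

Each pass raises (or at least never lowers) the data's likelihood under the current parameters, which is why the estimates settle at a local maximum rather than wandering.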

These notebooks are hopefully a source of simple explanations and easy-to-understand code that the reader can tinker with until the basic concept of EM "clicks". They are not intended as a rigorous treatment of the algorithm and its properties (see the Other Resources section below for further links).


Other Resources

You may find some of the following resources useful if you want to read more about EM:


Future

I hope to extend the notebooks to show how other model parameters can be estimated using EM.

I'd also like to add further notebooks explaining how EM is used in real-world applications, for example transcript quantification with Kallisto.
