machine-learning-I

Licence: BSD-2-Clause

Slides and lecture notes for the course 'machine learning I', taught at the Graduate School Neural Information Processing in Tuebingen in the first half of the Winter Semester 2012. The course is a one-semester, once-weekly course for students studying for a Master's degree in Neural Information Processing at the University of Tuebingen. This online repository includes only the slides and exercises for the first half of the semester, which focused on supervised learning. The second half of the course, which focused on unsupervised learning, was taught by Prof. Bethge. Please see the website of the graduate school for full details.

The course assumes some previous knowledge of linear algebra and probability theory, but includes a refresher on probability theory at the beginning. Being a short introductory course, it covers only very basic probabilistic algorithms for classification and regression.
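
To give a sense of the level, the following is a minimal sketch (ours, not taken from the course materials) of one such basic probabilistic regression algorithm: Bayesian linear regression with a Gaussian prior and known noise precision, in the spirit of Bishop, chapter 3. All variable names and parameter values are illustrative.

    % Minimal sketch of Bayesian linear regression (illustrative only;
    % not from the course materials). Prior precision alpha and noise
    % precision beta are assumed known for simplicity.
    alpha = 2.0;                        % prior precision of the weights
    beta  = 25.0;                       % noise precision (1/sigma^2)

    N = 20;
    x = linspace(0, 1, N)';             % inputs
    t = sin(2*pi*x) + randn(N,1)/sqrt(beta);   % noisy targets

    Phi = [ones(N,1) x x.^2 x.^3];      % cubic polynomial design matrix
    M   = size(Phi, 2);

    % Posterior over the weights: N(w | mN, SN)
    SN = inv(alpha*eye(M) + beta*(Phi'*Phi));
    mN = beta * SN * Phi' * t;

    % Predictive mean and variance on a test grid
    xs   = linspace(0, 1, 100)';
    Phis = [ones(100,1) xs xs.^2 xs.^3];
    mu   = Phis * mN;                          % predictive mean
    s2   = 1/beta + sum((Phis*SN).*Phis, 2);   % predictive variance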

Lecture notes were compiled by Patrick Putzky, [email protected].

Many of the figures in the slides and the lecture notes are taken from the book 'Pattern Recognition and Machine Learning', which Chris Bishop kindly provides on his website. The book is also the primary recommended textbook for the course.

List of lectures and exercises

[to be added at the end]

Note: The material in lecture notes 2 and 3 (introduction to probability theory and Gaussian models) was added to the course after it became apparent, during the lecture on Bayesian inference, that not all students had a sufficient background in probability theory. The material in lectures 2 and 3 was therefore originally taught AFTER the introduction to Bayesian inference. For the repository, we sorted the slides in logical rather than chronological order, which may have left some statements in the slides nonsensical, or at least odd. In addition, the numbering of lectures is NOT consistent between the slides and the lecture notes.

Course material is provided 'as is', and contains typos, sloppy citations, errors, inaccuracies and omissions. If you find any errors or want to make suggestions, please drop me an email or (even better) fix them and issue a pull request for them to be included. Also make sure to add yourself to the list of contributors at the end of this document.

Original course description

Objective

The scientific discipline of “machine learning” is concerned with developing and studying algorithms that can learn structure from data. It thus provides both important practical tools for data analysis and theoretical concepts for understanding how sensory systems can infer structure from empirical observations. This course will provide an introduction to important topics and algorithms in machine learning. A particular focus of this course will be on algorithms that have a clear statistical (and often Bayesian) interpretation.
We will cover both supervised algorithms (i.e., algorithms that try to learn an association between inputs and desired outputs) and unsupervised algorithms (which try to build up an internal model from the inputs alone). The “supervised” learning component of the course will include various linear and nonlinear regression algorithms as well as linear discriminants, logistic regression and nonlinear classification algorithms. The “unsupervised” learning component of the course will include fundamental concepts and algorithms of dimensionality reduction, blind source separation, and clustering.
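
As a concrete example from the supervised component, here is a minimal sketch (ours, not the course's reference implementation) of logistic regression trained by gradient ascent on the average log-likelihood; the toy data and all names are illustrative.

    % Minimal logistic regression by gradient ascent (illustrative sketch;
    % not from the course materials). Two Gaussian classes as toy data.
    N = 200; D = 2;
    X = [randn(N/2, D) + 1; randn(N/2, D) - 1];   % inputs, one class per half
    y = [ones(N/2, 1); zeros(N/2, 1)];            % labels in {0,1}

    Xb  = [ones(N,1) X];     % prepend a bias column
    w   = zeros(D+1, 1);     % weights, initialised at zero
    eta = 0.5;               % learning rate

    for iter = 1:1000
        p = 1 ./ (1 + exp(-Xb*w));          % sigmoid of the activations
        w = w + eta * Xb' * (y - p) / N;    % gradient of the mean log-likelihood
    end

    yhat = (1 ./ (1 + exp(-Xb*w))) > 0.5;   % predicted labels on training data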

Learning targets

In this course, students will learn about important topics and techniques in machine learning, with a particular focus on probabilistic models. The course will cover supervised learning (linear regression algorithms, linear discriminants, logistic regression, nonlinear classification algorithms) and unsupervised learning (principal component analysis and several of its generalizations, k-means, mixture of Gaussians, Expectation-Maximization).
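
From the unsupervised part of that list, here is a minimal k-means sketch (again ours, not from the course materials; bsxfun is used for broadcasting, since implicit expansion was not yet available in 2012-era MATLAB).

    % Minimal k-means sketch (illustrative; not from the course materials).
    K = 3;
    X = [randn(100,2); randn(100,2)+4; 0.5*randn(100,2)-4];   % toy data
    N = size(X, 1);
    mu = X(randperm(N, K), :);      % initialise means at random data points

    for iter = 1:50
        % Assignment step: squared distance of every point to every mean
        d = bsxfun(@plus, sum(X.^2,2), sum(mu.^2,2)') - 2*X*mu';
        [~, z] = min(d, [], 2);     % index of the nearest mean
        % Update step: each mean becomes the average of its assigned points
        for k = 1:K
            if any(z == k)
                mu(k,:) = mean(X(z == k,:), 1);
            end
        end
    end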

Prerequisites

Students should have a basic knowledge of linear algebra and probability theory. The exercise sheets will involve some MATLAB programming, so a basic familiarity with MATLAB would be advantageous.

Suggested Reading

  • Christopher M. Bishop (2007) Pattern Recognition and Machine Learning, Springer.
  • Trevor Hastie, Robert Tibshirani, Jerome Friedman (2009) The Elements of Statistical Learning, Springer.

Contributors

  • Jakob Macke, MPI Biological Cybernetics www.mackelab.org (lecturer part 1)
  • Matthias Bethge, University of Tuebingen www.bethgelab.org (lecturer part 2)
  • Patrick Putzky, MPI Biological Cybernetics (compiled lecture notes)
  • Nicolas Ludolph, University of Tuebingen (TA on the course)