
jermwatt / Machine_learning_refined

Licence: other
Notes, examples, and Python demos for the textbook "Machine Learning Refined" (published by Cambridge University Press).

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to, or similar to, Machine_learning_refined

25daysinmachinelearning
A regularly updated repository for learning machine learning with Python, including statistics content and materials
Stars: ✭ 53 (-92.93%)
Mutual labels:  jupyter-notebook, data-science, machine-learning-algorithms, numpy
Machine Learning From Scratch
Succinct Machine Learning algorithm implementations from scratch in Python, solving real-world problems (Notebooks and Book). Examples of Logistic Regression, Linear Regression, Decision Trees, K-means clustering, Sentiment Analysis, Recommender Systems, Neural Networks and Reinforcement Learning.
Stars: ✭ 42 (-94.4%)
Mutual labels:  artificial-intelligence, jupyter-notebook, data-science, machine-learning-algorithms
Machine Learning With Python
Practice and tutorial-style notebooks covering a wide variety of machine learning techniques
Stars: ✭ 2,197 (+192.93%)
Mutual labels:  artificial-intelligence, jupyter-notebook, data-science, numpy
Ai Learn
An artificial intelligence learning roadmap organizing nearly 200 hands-on cases and projects, with free companion teaching materials; beginner-friendly, with a focus on job-ready practice. Covers Python, mathematics, machine learning, data analysis, deep learning, computer vision, natural language processing, and other popular areas (tags: PyTorch, tensorflow, machine-learning, deep-learning, data-analysis, data-mining, mathematics, data-science, artificial-intelligence, python, tensorflow2, caffe, keras, algorithm, numpy, pandas, matplotlib, seaborn, nlp, cv)
Stars: ✭ 4,387 (+484.93%)
Mutual labels:  artificial-intelligence, data-science, numpy
Data Science Hacks
Data Science Hacks is a collection of tips and tricks to help you become a better data scientist. The hacks are for everyone, from beginner to advanced, and cover Python, Jupyter notebooks, pandas, and more.
Stars: ✭ 273 (-63.6%)
Mutual labels:  jupyter-notebook, data-science, numpy
Datascience course
A Data Science course in Portuguese
Stars: ✭ 294 (-60.8%)
Mutual labels:  artificial-intelligence, jupyter-notebook, data-science
Deep Learning Book
Repository for "Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python"
Stars: ✭ 2,705 (+260.67%)
Mutual labels:  artificial-intelligence, jupyter-notebook, data-science
Sciblog support
Support content for my blog
Stars: ✭ 694 (-7.47%)
Mutual labels:  artificial-intelligence, jupyter-notebook, data-science
Machine Learning For Trading
Code for Machine Learning for Algorithmic Trading, 2nd edition.
Stars: ✭ 4,979 (+563.87%)
Mutual labels:  artificial-intelligence, jupyter-notebook, data-science
Thesemicolon
This repository contains IPython notebooks and datasets for the data analytics YouTube tutorials on The Semicolon.
Stars: ✭ 345 (-54%)
Mutual labels:  jupyter-notebook, data-science, numpy
Stats Maths With Python
General statistics, mathematical programming, and numerical/scientific computing scripts and notebooks in Python
Stars: ✭ 381 (-49.2%)
Mutual labels:  jupyter-notebook, data-science, numpy
Cryptocurrency Price Prediction
Cryptocurrency Price Prediction Using LSTM neural network
Stars: ✭ 271 (-63.87%)
Mutual labels:  artificial-intelligence, jupyter-notebook, data-science
Notebooks Statistics And Machinelearning
Jupyter Notebooks from the old UnsupervisedLearning.com (RIP) machine learning and statistics blog
Stars: ✭ 270 (-64%)
Mutual labels:  jupyter-notebook, data-science, machine-learning-algorithms
Gdrl
Grokking Deep Reinforcement Learning
Stars: ✭ 304 (-59.47%)
Mutual labels:  artificial-intelligence, jupyter-notebook, numpy
Gophernotes
The Go kernel for Jupyter notebooks and nteract.
Stars: ✭ 3,100 (+313.33%)
Mutual labels:  artificial-intelligence, jupyter-notebook, data-science
Csinva.github.io
Slides, paper notes, class notes, blog posts, and research on ML 📉, statistics 📊, and AI 🤖.
Stars: ✭ 342 (-54.4%)
Mutual labels:  artificial-intelligence, slides, data-science
Data Science
Collection of useful data science topics along with code and articles
Stars: ✭ 315 (-58%)
Mutual labels:  artificial-intelligence, jupyter-notebook, data-science
Agile data code 2
Code for Agile Data Science 2.0, O'Reilly 2017, Second Edition
Stars: ✭ 413 (-44.93%)
Mutual labels:  jupyter-notebook, data-science, machine-learning-algorithms
Pba
Efficient Learning of Augmentation Policy Schedules
Stars: ✭ 461 (-38.53%)
Mutual labels:  artificial-intelligence, jupyter-notebook, data-science
Pytorch Geometric Yoochoose
This is a tutorial for PyTorch Geometric on the YooChoose dataset
Stars: ✭ 198 (-73.6%)
Mutual labels:  artificial-intelligence, jupyter-notebook, data-science

Machine Learning Refined: Notes, Exercises, and Jupyter notebooks

Below you will find a range of resources that complement the 2nd edition of Machine Learning Refined (published by Cambridge University Press).

Table of Contents

- A sampler of widgets and our pedagogy
- Online notes
- What is new in the second edition?
- How to use the book?
- Technical prerequisites
- Coding exercises
- Resources for instructors
- Errata
- Get a copy of the book
- Reviews and Endorsements
- Software installation and dependencies
- Contact
A sampler of widgets and our pedagogy

(Back to top)

We believe mastery of a given machine learning concept or topic is achieved only when the answer to each of the following three questions is affirmative.

  1. Intuition: Can you describe the idea with a simple picture?
  2. Mathematical derivation: Can you express your intuition in mathematical notation and derive underlying models/cost functions?
  3. Implementation: Can you code up your derivations in a programming language, say Python, without using high-level libraries?

Intuition comes first. Intuitive leaps precede intellectual ones, and because of this we have included over 300 color illustrations in the book, meticulously designed to enable an intuitive grasp of technical concepts. Many of those illustrations are snapshots of animations showing, for example, the convergence of certain algorithms or the evolution of certain models from underfitting all the way to overfitting. Concepts like these are best illustrated and intuited using animations rather than static figures. You'll find a large number of such animations in this repository, and you can modify them yourself via the raw Jupyter notebook versions of these notes. Here are just a few examples:

- Cross-validation (regression)
- Cross-validation (two-class classification)
- Cross-validation (multi-class classification)
- K-means clustering
- Feature normalization
- Normalized gradient descent
- Rotation
- Convexification
- Dogification!
- A nonlinear transformation
- Weighted classification
- The moving average
- Batch normalization
- Logistic regression
- Polynomials vs. NNs vs. Trees (regression)
- Polynomials vs. NNs vs. Trees (classification)
- Changing gradient descent's steplength (1d)
- Changing gradient descent's steplength (2d)
- Convex combination of two functions
- Taylor series approximation
- Feature selection via regularization
- Secant planes
- Function approximation with a neural network
- A regression tree
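As a small taste of the implementation side of this pedagogy (question 3 above), here is a minimal from-scratch sketch of one of the animated topics listed above, K-means clustering, using only NumPy. This is our own illustrative sketch, not code from the book; the toy dataset, variable names, and cluster count are assumptions made for the example.

  import numpy as np

  def kmeans(X, k, iters=100, seed=0):
      rng = np.random.default_rng(seed)
      # initialize centroids by sampling k distinct data points
      centroids = X[rng.choice(len(X), size=k, replace=False)]
      for _ in range(iters):
          # assignment step: label each point with its nearest centroid
          dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
          labels = dists.argmin(axis=1)
          # update step: move each centroid to the mean of its assigned points
          new_centroids = np.array([X[labels == j].mean(axis=0)
                                    if np.any(labels == j) else centroids[j]
                                    for j in range(k)])
          if np.allclose(new_centroids, centroids):
              break
          centroids = new_centroids
      return centroids, labels

  # toy data: two well-separated blobs in the plane
  X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
  centroids, labels = kmeans(X, k=2)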

Mathematical optimization: the workhorse of machine learning. We strongly emphasize the importance of mathematical optimization in our treatment of machine learning. Optimization is the workhorse of machine learning and is fundamental at many levels, from the tuning of individual models to the general selection of appropriate nonlinearities via cross-validation. Because of this, a solid understanding of mathematical optimization is requisite if one wishes to deeply understand machine learning and to implement its fundamental algorithms. Part I of the book provides a complete introduction to mathematical optimization, covering zero-, first-, and second-order methods, which are relied upon later in deriving and tuning machine learning models.
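To make the role of first-order methods concrete, here is a minimal gradient descent sketch in the spirit of Part I; the cost function g and the fixed steplength alpha are our own illustrative choices, not taken from the text.

  import autograd.numpy as np
  from autograd import grad

  def g(w):                        # a simple convex cost: g(w) = w'w + 2
      return np.dot(w, w) + 2.0

  gradient = grad(g)               # autograd constructs the gradient function

  w = np.array([3.0, -2.0])        # initial point
  alpha = 0.1                      # fixed steplength (learning rate)
  for _ in range(50):
      w = w - alpha * gradient(w)  # standard gradient descent step

  print(w, g(w))                   # w ends up near the global minimizer [0, 0]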

Learning by doing. We place significant emphasis on the design and implementation of algorithms throughout the text, with implementations of fundamental algorithms given in Python. These fundamental examples can then be used as building blocks for the reader to complete the text's programming exercises, allowing them to "get their hands dirty" and "learn by doing," practicing the concepts introduced in the body of the text. While in principle any programming language can be used to complete the text's coding exercises, we highly recommend using Python for its ease of use and large support community. We also recommend using the open-source Python libraries NumPy, autograd, and matplotlib, as well as the Jupyter notebook editor, to make implementing and testing code easier. A complete set of installation instructions, datasets, and starter notebooks can be found in this repository.
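To give a flavor of this workflow, the following sketch trains a least squares linear regression by gradient descent using only NumPy and autograd; the synthetic dataset and hyperparameters are our own illustrative assumptions rather than one of the text's exercises.

  import autograd.numpy as np
  from autograd import grad

  # synthetic one-dimensional data lying near the line y = 2x + 1
  x = np.linspace(0, 1, 50)
  y = 2.0 * x + 1.0 + 0.1 * np.sin(25 * x)   # deterministic "noise"

  def least_squares(w):            # mean squared error of the line w[0] + w[1]*x
      return np.mean((w[0] + w[1] * x - y) ** 2)

  gradient = grad(least_squares)

  w = np.zeros(2)
  for _ in range(2000):
      w = w - 0.5 * gradient(w)    # gradient descent with a fixed steplength

  print(w)                         # ends up close to [1.0, 2.0]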

Online notes

(Back to top)

A select number of chapters/sections are highlighted below and linked to HTML notes that served as early drafts for the second edition of the textbook. You can find these HTML files, as well as the Jupyter notebooks that created them, in the notes subdirectory.


Chapter 1. Introduction to Machine Learning

1.1 Introduction
1.2 Distinguishing Cats from Dogs: a Machine Learning Approach
1.3 The Basic Taxonomy of Machine Learning Problems
1.4 Mathematical Optimization
1.5 Conclusion

Chapter 2. Zero-Order Optimization Techniques

2.1 Introduction
2.2 The Zero-Order Optimality Condition
2.3 Global Optimization Methods
2.4 Local Optimization Methods
2.5 Random Search
2.6 Coordinate Search and Descent
2.7 Conclusion
2.8 Exercises

Chapter 3. First-Order Optimization Techniques

3.1 Introduction
3.2 The First-Order Optimality Condition
3.3 The Geometry of First-Order Taylor Series
3.4 Computing Gradients Efficiently
3.5 Gradient Descent
3.6 Two Natural Weaknesses of Gradient Descent
3.7 Conclusion
3.8 Exercises

Chapter 4. Second-Order Optimization Techniques

4.1 The Second-Order Optimality Condition
4.2 The Geometry of Second-Order Taylor Series
4.3 Newton’s Method
4.4 Two Natural Weaknesses of Newton’s Method
4.5 Conclusion
4.6 Exercises

Chapter 5. Linear Regression

5.1 Introduction
5.2 Least Squares Linear Regression
5.3 Least Absolute Deviations
5.4 Regression Quality Metrics
5.5 Weighted Regression
5.6 Multi-Output Regression
5.7 Conclusion
5.8 Exercises
5.9 Endnotes

Chapter 6. Linear Two-Class Classification

6.1 Introduction
6.2 Logistic Regression and the Cross Entropy Cost
6.3 Logistic Regression and the Softmax Cost
6.4 The Perceptron
6.5 Support Vector Machines
6.6 Which Approach Produces the Best Results?
6.7 The Categorical Cross Entropy Cost
6.8 Classification Quality Metrics
6.9 Weighted Two-Class Classification
6.10 Conclusion
6.11 Exercises

Chapter 7. Linear Multi-Class Classification

7.1 Introduction
7.2 One-versus-All Multi-Class Classification
7.3 Multi-Class Classification and the Perceptron
7.4 Which Approach Produces the Best Results?
7.5 The Categorical Cross Entropy Cost Function
7.6 Classification Quality Metrics
7.7 Weighted Multi-Class Classification
7.8 Stochastic and Mini-Batch Learning
7.9 Conclusion
7.10 Exercises

Chapter 8. Linear Unsupervised Learning

8.1 Introduction
8.2 Fixed Spanning Sets, Orthonormality, and Projections
8.3 The Linear Autoencoder and Principal Component Analysis
8.4 Recommender Systems
8.5 K-Means Clustering
8.6 General Matrix Factorization Techniques
8.7 Conclusion
8.8 Exercises
8.9 Endnotes

Chapter 9. Feature Engineering and Selection

9.1 Introduction
9.2 Histogram Features
9.3 Feature Scaling via Standard Normalization
9.4 Imputing Missing Values in a Dataset
9.5 Feature Scaling via PCA-Sphering
9.6 Feature Selection via Boosting
9.7 Feature Selection via Regularization
9.8 Conclusion
9.9 Exercises

Chapter 10. Principles of Nonlinear Feature Engineering

10.1 Introduction
10.2 Nonlinear Regression
10.3 Nonlinear Multi-Output Regression
10.4 Nonlinear Two-Class Classification
10.5 Nonlinear Multi-Class Classification
10.6 Nonlinear Unsupervised Learning
10.7 Conclusion
10.8 Exercises

Chapter 11. Principles of Feature Learning

11.1 Introduction
11.2 Universal Approximators
11.3 Universal Approximation of Real Data
11.4 Naive Cross-Validation
11.5 Efficient Cross-Validation via Boosting
11.6 Efficient Cross-Validation via Regularization
11.7 Testing Data
11.8 Which Universal Approximator Works Best in Practice?
11.9 Bagging Cross-Validated Models
11.10 K-Fold Cross-Validation
11.11 When Feature Learning Fails
11.12 Conclusion
11.13 Exercises

Chapter 12. Kernel Methods

12.1 Introduction
12.2 Fixed-Shape Universal Approximators
12.3 The Kernel Trick
12.4 Kernels as Measures of Similarity
12.5 Optimization of Kernelized Models
12.6 Cross-Validating Kernelized Learners
12.7 Conclusion
12.8 Exercises

Chapter 13. Fully Connected Neural Networks

13.1 Introduction
13.2 Fully Connected Neural Networks
13.3 Activation Functions
13.4 The Backpropagation Algorithm
13.5 Optimization of Neural Network Models
13.6 Batch Normalization
13.7 Cross-Validation via Early Stopping
13.8 Conclusion
13.9 Exercises

Chapter 14. Tree-Based Learners

14.1 Introduction
14.2 From Stumps to Deep Trees
14.3 Regression Trees
14.4 Classification Trees
14.5 Gradient Boosting
14.6 Random Forests
14.7 Cross-Validation Techniques for Recursively Defined Trees
14.8 Conclusion
14.9 Exercises

Appendix A. Advanced First- and Second-Order Optimization Methods

A.1 Introduction
A.2 Momentum-Accelerated Gradient Descent
A.3 Normalized Gradient Descent
A.4 Advanced Gradient-Based Methods
A.5 Mini-Batch Optimization
A.6 Conservative Steplength Rules
A.7 Newton’s Method, Regularization, and Nonconvex Functions
A.8 Hessian-Free Methods

Appendix B. Derivatives and Automatic Differentiation

B.1 Introduction
B.2 The Derivative
B.3 Derivative Rules for Elementary Functions and Operations
B.4 The Gradient
B.5 The Computation Graph
B.6 The Forward Mode of Automatic Differentiation
B.7 The Reverse Mode of Automatic Differentiation
B.8 Higher-Order Derivatives
B.9 Taylor Series
B.10 Using the autograd Library

Appendix C. Linear Algebra

C.1 Introduction
C.2 Vectors and Vector Operations
C.3 Matrices and Matrix Operations
C.4 Eigenvalues and Eigenvectors
C.5 Vector and Matrix Norms


What is new in the second edition?

(Back to top)

The second edition of this text is a complete revision of our first endeavor, with virtually every chapter of the original rewritten from the ground up and eight new chapters of material added, doubling the size of the first edition. Topics from the first edition, from expositions on gradient descent to those on One-versus-All classification and Principal Component Analysis, have been reworked and polished. A swath of new topics has been added throughout the text, from derivative-free optimization to weighted supervised learning, feature selection, nonlinear feature engineering, boosting-based cross-validation, and more. While heftier in size, the intent of our original attempt remains unchanged: to explain machine learning, from first principles to practical implementation, in the simplest possible terms.

How to use the book?

(Back to top)

The example "roadmaps" shown below provide suggested paths for navigating the text based on a variety of learning outcomes and university courses taught using the present book.

Recommended study roadmap for a course on the essentials of machine learning, including requisite chapters (left column), sections (middle column), and corresponding topics (right column). This essentials plan is suitable for time-constrained courses (in quarter-based programs and universities) or self-study, or where machine learning is not the sole focus but a key component of some broader course of study.

Recommended study roadmap for a full treatment of standard machine learning subjects, including chapters, sections, and corresponding topics to cover. This plan entails a more in-depth coverage of machine learning topics compared to the essentials roadmap given above, and is best suited for senior undergraduate/early graduate students in semester-based programs and passionate independent readers.

Recommended study roadmap for a course on mathematical optimization for machine learning and deep learning, including chapters, sections, and topics to cover.

Recommended study roadmap for an introductory portion of a course on deep learning, including chapters, sections, and topics to cover.


Technical prerequisites

(Back to top)

To make full use of the text one needs only a basic understanding of vector algebra (mathematical functions, vector arithmetic, etc.) and computer programming (for example, basic proficiency with a dynamically typed language like Python). We provide complete introductory treatments of other prerequisite topics including linear algebra, vector calculus, and automatic differentiation in the appendices of the text.

Coding exercises

(Back to top)

In the mlrefined_exercises directory you can find starting wrappers for coding exercises from the first and second editions of the text.

Resources for instructors

(Back to top)

Instructors may request a copy of this text for examination from the publisher's website. Cambridge University Press can also provide you with the solution manual to both editions of the text as well as a complete set of educational slides.

Errata

(Back to top)

Here you can find a regularly updated errata sheet for the second edition of the text. Please report any typos, bugs, broken links, etc. in the Issues section of this repository or by contacting us directly via email (see the Contact section for more info).

Get a copy of the book

(Back to top)

Reviews and Endorsements

(Back to top)

An excellent book that treats the fundamentals of machine learning from basic principles to practical implementation. The book is suitable as a text for senior-level and first-year graduate courses in engineering and computer science. It is well organized and covers basic concepts and algorithms in mathematical optimization methods, linear learning, and nonlinear learning techniques. The book is nicely illustrated in multiple colors and contains numerous examples and coding exercises using Python.

John G. Proakis, University of California, San Diego

Some machine learning books cover only programming aspects, often relying on outdated software tools; some focus exclusively on neural networks; others, solely on theoretical foundations; and yet more books detail advanced topics for the specialist. This fully revised and expanded text provides a broad and accessible introduction to machine learning for engineering and computer science students. The presentation builds on first principles and geometric intuition, while offering real-world examples, commented implementations in Python, and computational exercises. I expect this book to become a key resource for students and researchers.

Osvaldo Simeone, King's College, London

This book is great for getting started in machine learning. It builds up the tools of the trade from first principles, provides lots of examples, and explains one thing at a time at a steady pace. The level of detail and runnable code show what's really going on when we run a learning algorithm.

David Duvenaud, University of Toronto

This book covers various essential machine learning methods (e.g., regression, classification, clustering, dimensionality reduction, and deep learning) from a unified mathematical perspective of seeking the optimal model parameters that minimize a cost function. Every method is explained in a comprehensive, intuitive way, and mathematical understanding is aided and enhanced with many geometric illustrations and elegant Python implementations.

Kimiaki Shirahama, Kindai University, Japan

Books featuring machine learning are many, but those which are simple, intuitive, and yet theoretical are extraordinary 'outliers'. This book is a fantastic and easy way to launch yourself into the exciting world of machine learning, grasp its core concepts, and code them up in Python or Matlab. It was my inspiring guide in preparing my 'Machine Learning Blinks' on my BASIRA YouTube channel for both undergraduate and graduate levels.

Islem Rekik, Director of the Brain And SIgnal Research and Analysis (BASIRA) Laboratory

Software installation and dependencies

(Back to top)

To successfully run the Jupyter notebooks contained in this repository we highly recommend downloading the Anaconda Python 3 distribution. Many of these notebooks also employ the automatic differentiation library autograd, which can be installed by typing the following command at your terminal:

  pip install autograd

With minor adjustments, users can also run these notebooks using JAX, the GPU/TPU-extended version of autograd.
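For simple notebooks the adjustment is often just a change of imports, as in the hedged sketch below; note that JAX's 32-bit defaults and functional-style API may require further changes in practice.

  # autograd version:
  #   import autograd.numpy as np
  #   from autograd import grad

  # JAX version -- frequently the only change needed for simple code:
  import jax.numpy as np
  from jax import grad

  def g(w):
      return np.dot(w, w) + 2.0

  print(grad(g)(np.array([3.0, -2.0])))   # gradient of g at [3, -2] is [6, -4]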

Contact

(Back to top)

This repository is in active development by Jeremy Watt and Reza Borhani. Please do not hesitate to reach out with comments, questions, typos, etc.
