Lectures & Readings
Most course readings are taken from Machine Learning: A Probabilistic Perspective (MLaPP), a draft textbook in preparation by Prof. Kevin Murphy. The first chapter is freely available online. Later chapters will be distributed via a pair of readers available from the Metcalf Copy Center.
The specific schedule of topics and readings below is tentative, and will change as the course progresses.
Date | Topic | Primary Readings | Materials |
---|---|---|---|
1/26 | Course Overview | MLaPP: 1.1-1.3 | slides |
1/31 | Probability: Discrete random variables; Dimensionality & model validation | MLaPP: 2.1-2.3; MLaPP: 1.4.1-1.4.4, 8.3.8 | slides |
2/02 | Maximum likelihood & Bayesian learning; Naive Bayes classifiers | MLaPP: 3.1-3.2; MLaPP: 5.1-5.2 | slides |
2/07 | Probability: Continuous random variables; Smoothing: Beta & Dirichlet priors | MLaPP: 2.4, 2.5.4; MLaPP: 3.3-3.5 | slides |
2/09 | Bayesian decision theory & ROCs; Gaussian ML estimation | MLaPP: 8.1-8.2, 8.3.4; MLaPP: 1.4.5-1.4.6 | slides |
2/14 | Decision theory & continuous estimation; Bayesian model selection; Directed graphical models | MLaPP: 8.2, 10.2; MLaPP: 1.4.7-1.4.9, 10.3; MLaPP: 9.1-9.2 | slides |
2/16 | Multivariate Gaussian Distributions; Gaussian Classification | MLaPP: 2.5, 4.1-4.4.2; MLaPP: 5.3-5.3.1 | slides |
2/21 | Brown Holiday: No Lecture | ||
2/23 | Linear Regression & Least Squares; Bayesian Linear Regression | MLaPP: 1.4.5-1.4.7; MLaPP: 6.2-6.3 | slides |
2/28 | Gaussian Discriminant Analysis; Logistic Regression, Probit Regression | MLaPP: 5.3; MLaPP: 6.4, 7.4 | slides |
3/01 | Logistic Regression; Gradient Descent, Newton's Method | MLaPP: 6.4 | slides |
3/06 | Logistic Regression: ML & MAP; Laplace Approximations | MLaPP: 6.4; MLaPP: 6.5 | slides |
3/08 | Exponential Families | MLaPP: 7.1-7.2 | slides |
3/13 | Midterm Exam: In Class | ||
3/15 | Generalized Linear Models; Robust Linear Regression; Binary Feature Selection & Search | MLaPP: 7.3; MLaPP: 6.2.3; MLaPP: 13.2 | slides |
3/20 | L1 Regularization & Sparsity | MLaPP: 13.3-13.4 | slides |
3/22 | Online Learning & Perceptrons; Kernel Methods | MLaPP: 6.6; MLaPP: 14.2, 14.4 | slides |
3/27 | Spring Break: No Lecture | ||
3/29 | Spring Break: No Lecture | ||
4/03 | Gaussian Process Regression; Gaussian Process Classification, GLMs | MLaPP: 15.1, 15.2; MLaPP: 15.3 | slides |
4/05 | Margins & Support Vector Machines | MLaPP: 14.5 | slides |
4/10 | Clustering & K-Means Algorithm; Probabilistic Mixture Models | MLaPP: 1.3, 11.2; MLaPP: 11.2, 11.3 | slides |
4/12 | Graphical Models; EM for Mixture Models | MLaPP: 9.1, 9.2, 9.4; MLaPP: 11.2-11.4 | slides |
4/17 | Expectation Maximization Algorithm | MLaPP: 11.4 | slides |
4/19 | Principal Components Analysis; Factor Analysis & Probabilistic PCA | MLaPP: 12.1-12.3 | slides |
4/24 | EM Algorithm for Factor Analysis & PPCA | MLaPP: 12.1-12.3 | slides |
4/26 | Hidden Markov Models; Inference & Learning for HMMs | MLaPP: 17.1-17.3; MLaPP: 17.4-17.5 | slides |
5/01 | Topic Models; Monte Carlo Methods | MLaPP: 27.3; MLaPP: 23.2, 23.4 | slides |
5/03 | MCMC & Gibbs Samplers; Continuous State Space Models | MLaPP: 24.2; MLaPP: 18.1-18.3, 23.5 | slides |
5/08 | Final Exam Review Session | | slides 1, slides 2 |
5/10 | Graduate Project Presentations | | |
Recitations
Date | Topic | Readings | Materials |
---|---|---|---|
2/02 | Matlab Tutorial | YAGTOM | Matlab |
2/09 | Continuous Bayesian Estimation | MLaPP: 2.4, 3.3 | demo notes |
2/16 | Linear Algebra Tutorial | Stanford CS229 | notes |
2/23 | Multivariate Gaussians, Linear Regression | MLaPP: 4, 6.2-6.3 | demo notes |
3/01 | Continuous Optimization | MLaPP: 6.4 | demo notes |
3/08 | Midterm Review Session | MLaPP: 1-6, 8, 10 | slides 1, slides 2 |
3/15 | No Recitation | ||
3/22 | Lagrange Multipliers | Klein tutorial | notes |
3/29 | No Recitation | ||
4/05 | Kernels | MLaPP: 14 | notes |
4/12 | EM Algorithm | MLaPP: 11.4 | notes |
4/19 | Markov Chains | MLaPP: 17.2 | Matlab notes |
4/26 | Dynamic Programming, HMMs | MLaPP: 17.4 | notes |
5/03 | No Recitation | ||