Date | Topic | Readings | Materials |
---|---|---|---|
09/10 | Course Overview | PRML 1.1, 1.2 | Lecture Slides |
09/15 | Classification: Evaluation and ROC Curves | PRML 1.5.1, Wikipedia | |
09/17 | Classification: Naive Bayes | PRML 4.2 | |
09/22 | Maximum Likelihood Estimation | PRML 1.2.4, 2.1, 2.2 | |
09/24 | Frequentist and Bayesian Estimation | PRML 1.2.3, 1.5 | |
09/29 | Bayesian Loss Functions, Dirichlet Priors | PRML 1.5, 2.1, 2.2 | Lecture Slides |
10/01 | K Nearest Neighbors, Cross-validation | PRML 1.3, 2.5 | Example: Color Constancy |
10/06 | Linear Regression: Maximum Likelihood | PRML 3.1.1, 3.1.2, 3.1.4 | Lecture Slides |
10/08 | Bayesian Regression, Multivariate Gaussians | PRML 2.3.1-2.3.4, 3.3 | Lecture Slides |
10/13 | Logistic Regression | PRML 4.3 | |
10/15 | Logistic Regression, Exponential Families | PRML 2.4, 4.3 | |
10/20 | Logistic Regression, Stochastic Gradient Descent | PRML 4.3, 5.2.4 | |
10/22 | MIDTERM | ||
10/27 | Regularized Stochastic Gradient, Perceptron Algorithm, Kernels | PRML 4.1.7, 6.1, 6.2 | Lecture Slides |
10/29 | Kernels, Gaussian Process Regression and Classification | PRML 6.2, 6.4.1, 6.4.2, 6.4.5 | |
11/03 | Clustering, K-Means | PRML 9.1 | |
11/05 | Maximum Entropy Review, K-Means | PRML 9.1 | |
11/10 | Rand Index, Expectation-Maximization Algorithm | PRML 9.2, Wikipedia | |
11/12 | Mixtures of Multinomials or Gaussians, EM Algorithm | PRML 9.2, 9.3 | Lecture Slides |
11/17 | EM Algorithm Theory, Hidden Markov Models | PRML 9.4, 13.1, 13.2 | Lecture Slides |
11/19 | HMMs: Viterbi, Forward-Backward, and EM Algorithms | PRML 13.2 | Lecture Slides |
11/24 | Monte Carlo: Importance Sampling, MCMC, Gibbs Sampling | PRML 11.1.1, 11.1.4, 11.2.1, 11.3 | |
12/01 | Principal Component Analysis, Probabilistic PCA | PRML 12.1.2, 12.1.3, 12.2 | |
12/03 | Probabilistic PCA, Factor Analysis, Directed Graphical Models | PRML 12.2.1, 12.2.2, 12.2.4, 8.1 | |
## Supplemental Exercises
In Bishop's *Pattern Recognition and Machine Learning*, exercises marked "WWW" have solutions available online. Many of these problems test conceptual issues likely to appear on exams, particularly those in the following chapters:
- Chapter 3: Linear Regression
- Chapter 4: Logistic Regression and Maximum Entropy
- Chapter 9: Mixture Models and the EM Algorithm
- Chapter 13: Hidden Markov Models
## Supplemental Readings
An electronic edition of another good machine learning textbook, *The Elements of Statistical Learning* by Hastie, Tibshirani, and Friedman, is available online. The authors give clear explanations of many topics from a more statistical perspective. The following sections are particularly recommended for those who would like an alternative presentation of some topics:
- Chapter 2: Overview of Supervised Learning
- Sections 3.2, 3.4: Linear Regression
- Section 4.4: Logistic Regression and Maximum Entropy
- Section 7.10: Cross-Validation
- Sections 13.2, 13.3: K-Means and K-Nearest Neighbors
- Section 14.3: Clustering