CSE 291B. Unsupervised Learning

Winter 2009
Professor: Lawrence Saul
Lectures: Tue/Thu 11:00 am - 12:20 pm
Location: Warren Lecture Hall 2209
Office hours: after class and/or by appointment.
Units: 2 or 4

Course Description

The lectures in this course will survey leading algorithms for unsupervised learning and high-dimensional data analysis. The first part of the course will cover probabilistic generative models of high-dimensional data, including Gaussian mixture models, factor analysis, nonnegative matrix factorization, exponential family PCA, probabilistic latent semantic analysis, and latent Dirichlet allocation. The second part of the course will cover spectral methods for dimensionality reduction, including multidimensional scaling, Isomap, maximum variance unfolding, locally linear embedding, graph Laplacian methods, spectral clustering, and kernel PCA. The third part of the course will consist of student presentations on related topics of interest.
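
As a small illustration of the flavor of algorithms surveyed above, here is a minimal NumPy sketch of principal component analysis computed via the singular value decomposition. This is not course-provided code; the function name pca and the synthetic data are assumptions made purely for illustration.

    # Minimal PCA sketch via the SVD (illustrative only; not course code).
    import numpy as np

    def pca(X, k):
        """Project the rows of X onto the top-k principal components."""
        # Center the data so the principal directions pass through the mean.
        Xc = X - X.mean(axis=0)
        # Thin SVD: the rows of Vt are the principal directions.
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        components = Vt[:k]             # (k, n_features)
        projection = Xc @ components.T  # (n_samples, k) low-dimensional coordinates
        explained = (S[:k] ** 2) / (S ** 2).sum()
        return projection, components, explained

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Synthetic 3-D data lying close to a 2-D plane.
        latent = rng.normal(size=(200, 2))
        X = latent @ rng.normal(size=(2, 3)) + 0.05 * rng.normal(size=(200, 3))
        Z, comps, frac = pca(X, k=2)
        print(Z.shape, frac)  # (200, 2) and the fractions of variance explained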

Prerequisites

The course is aimed at graduate students in machine learning and related fields. Students should have earned a high grade in a previous, related course, such as CSE 250A, CSE 250B, ECE 271A, or ECE 271B. The course will be taught by lecture in the same style as CSE 250A, though at a more advanced mathematical level.

Grading

Students may take the course for either two or four units of credit. For two units, students will be required to give a 20-minute oral presentation on an assigned reading or research project. They may also be required to submit "scribe notes" (handwritten or typeset) on a small subset of lectures. For four units, students will additionally be required to complete occasional (but challenging) problem sets.

  • HW 1 (out Jan 29, due Feb 12)
  • HW 2 (out Mar 3, due Mar 17)

Syllabus

Tue Jan 6: Course overview. Review of clustering: k-means algorithm, Gaussian mixture modeling.
Thu Jan 8: Review of linear dimensionality reduction: principal component analysis, factor analysis.
Tue Jan 13: EM algorithms for factor analysis, principal component analysis, and mixture of factor analyzers.
Thu Jan 15: Nonnegative matrix factorization: cost functions and multiplicative updates (see the code sketch after this schedule).
Tue Jan 20: Nonnegative matrix factorization: auxiliary functions and proofs of convergence.
Thu Jan 22: Exponential family PCA.
Tue Jan 27: Random projection trees (guest lecture by Yoav Freund).
Thu Jan 29: Document modeling: bag-of-words representation, probabilistic latent semantic indexing, Dirichlet models.
Tue Feb 3: Latent Dirichlet allocation.
Thu Feb 5: Variational approximations for inference.
Tue Feb 10: Singular value decomposition, low-rank matrix approximations, multidimensional scaling.
Thu Feb 12: Manifold learning, Isomap algorithm.
Tue Feb 17: Nyström approximation; maximum variance unfolding (MVU).
Thu Feb 19: Spectral clustering, normalized cuts, graph partitioning.
Tue Feb 24: Laplacian eigenmaps, locally linear embedding (LLE).
Thu Feb 26: Low-rank factorizations for MVU, kernel PCA; class evaluations.
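
The Jan 15 and Jan 20 lectures treat the multiplicative updates for nonnegative matrix factorization. The sketch below shows the Lee-Seung multiplicative updates for the squared (Frobenius) reconstruction error, included only as a rough illustration; the function name nmf, the random data, and the small eps constant are assumptions, not course materials.

    # Minimal NMF sketch with multiplicative updates for the Frobenius error
    # (illustrative only; not course code).
    import numpy as np

    def nmf(V, k, n_iters=200, eps=1e-9, seed=0):
        """Factor a nonnegative V (m x n) as W @ H with W (m x k), H (k x n) >= 0."""
        rng = np.random.default_rng(seed)
        m, n = V.shape
        W = rng.random((m, k)) + eps
        H = rng.random((k, n)) + eps
        for _ in range(n_iters):
            # Each update multiplies by a ratio of nonnegative terms, so
            # nonnegativity is preserved and the squared error does not increase.
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        V = rng.random((50, 40))          # nonnegative "data" matrix
        W, H = nmf(V, k=5)
        err = np.linalg.norm(V - W @ H)   # reconstruction error after the updates
        print(W.shape, H.shape, round(err, 3))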

Student Presentations

Tue Mar 3: Boyko Kakaradov, Josh Lewis, Cheng-Xian Li, Adam Koerner
Thu Mar 5: Aditya Menon, Tim Mullen, Matt Rodriguez, Andrea Vattani, Priya Velu
Tue Mar 10: Pouya Bozorgmehr, Heejin Choi, Eric Christiansen, Timothy Fair
Thu Mar 12: Kristen Jackie, Han Kim, Justin Ma, Kei Ma, Vicente Malave
Thu Mar 19: Peter Shin, Il-Young Son, Andrew Stout, Walter Talbott, Robert Thomas, Alex Tsiatas, Kai Wang, Wensong Xu

Readings

Readings are grouped by topic:

  • Probabilistic PCA
  • Nonnegative matrix factorization
  • Exponential family PCA
  • Document modeling
  • Multidimensional scaling and Nyström approximation
  • Isomap and extensions
  • Maximum variance unfolding
  • Spectral clustering
  • Graph Laplacian methods
  • Locally linear embedding and related work
  • Kernel PCA