The lectures in this course will survey leading algorithms for unsupervised learning and high dimensional data analysis. The first part of the course will cover probabilistic/generative models of high dimensional data, such as Gaussian mixture models, factor analysis, nonnegative matrix factorization, exponential family PCA, probabilistic latent semantic analysis, latent Dirichlet allocation, independent component analysis, and deep neural networks. The second part of the course will cover spectral methods for dimensionality reduction, including multidimensional scaling, Isomap, maximum variance unfolding, locally linear embedding, graph Laplacian methods, spectral clustering, and kernel PCA. Some lecture notes are available here.
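To give a flavor of the first part of the course, the following is a minimal sketch (not official course material) of expectation-maximization for a two-component Gaussian mixture model in one dimension, using only NumPy; the data, initialization, and iteration count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D data: two well-separated Gaussian clusters.
x = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(3.0, 1.0, 200)])

# Initial guesses for the K = 2 mixture parameters.
mu = np.array([-1.0, 1.0])       # component means
sigma = np.array([1.0, 1.0])     # component standard deviations
pi = np.array([0.5, 0.5])        # mixing weights

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: posterior responsibility r[n, k] of component k for point n.
    dens = np.stack(
        [p * gaussian_pdf(x, m, s) for p, m, s in zip(pi, mu, sigma)], axis=1
    )
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances in closed form.
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(sorted(mu))  # estimated means, near the true values -2 and 3
```

Each EM iteration provably does not decrease the data log-likelihood, which is why the simple alternation above converges to a (local) maximum-likelihood fit.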
The course is aimed at graduate students in machine learning and related fields. Students should have earned a high grade in a previous, related course, such as CSE 250A, CSE 250B, ECE 271A, or ECE 271B. The course will be taught by lecture in the same style as CSE 250A, though at a more advanced mathematical level. Enrollment is by permission of the instructor.
There will be three homework assignments (60-75%) and a final course project (25-40%). Students will give brief presentations on their course projects during the last two weeks of class.
Each student will have ten minutes to present their project and answer one or two brief questions. (Students working in pairs will have up to twenty minutes.) No write-up of the project is required, but each student must submit a hard copy of the slides. The grading criteria are simple:
Project presentation dates: Wed May 30, Mon Jun 04, and Wed Jun 06.