The goal of this class is to provide a broad introduction to machine learning. The topics covered include supervised learning methods, such as k-nearest neighbor classifiers, decision trees, boosting, and perceptrons, and unsupervised learning methods, such as k-means and hierarchical clustering. The topics covered in this class will be different from those covered in CSE 150.
Students are expected to be very familiar with linear algebra and probability, and should be able to program in some language. In particular, students are expected to know:
- Probability: Events, random variables, expectations, joint, conditional and marginal distributions, and independence.
- Linear Algebra: Vector spaces, subspaces, matrix multiplication, matrix inversion, linear independence, rank, determinants, orthonormality, bases, and solving systems of linear equations.
- Calculus: Computing maxima and minima of common functions, and taking derivatives and integrals.
- Programming: We fully expect students to know how to write programs in some language. It can be a language of the student's choice, but no debugging help or programming support will be provided.
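As a rough self-check on the programming prerequisite (a minimal sketch in Python, though the course itself is language-agnostic), students should find a short program like this 1-nearest-neighbor classifier straightforward to write and debug on their own:

```python
import math

def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`.

    `train` is a list of (feature_vector, label) pairs; distances
    are ordinary Euclidean distances.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Toy data: two small clusters in the plane.
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(nearest_neighbor(train, (0.05, 0.1)))  # -> a
print(nearest_neighbor(train, (0.95, 1.0)))  # -> b
```

If writing and testing something of this size feels difficult, plan to brush up on programming before taking the course.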
Homework 0 is a calibration homework that covers some (but not all) of the material that students are expected to know as a prerequisite.
CSE 150 is not a formal prerequisite, but taking it is a big plus!
There is no required text for this course. Slides or notes will be posted on the class website. We recommend the following textbooks for optional reading.
- Richard Duda, Peter Hart and David Stork, Pattern Classification, 2nd ed. John Wiley & Sons, 2001.
- Tom Mitchell, Machine Learning. McGraw-Hill, 1997.
- Michael Kearns and Umesh Vazirani, An Introduction to Computational Learning Theory. MIT Press, 1994.
- Trevor Hastie, Robert Tibshirani and Jerome Friedman, The Elements of Statistical Learning, 2nd ed. Springer, 2009.