The goal of this class is to provide a broad introduction to machine learning. The topics covered include supervised learning, such as k-nearest neighbor classifiers, decision trees, boosting, and perceptrons, and unsupervised learning, such as k-means, PCA, and Gaussian mixture models. We will also look at some basics of learning theory. The topics covered in this class will differ from those covered in CSE 150.
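As a taste of the supervised-learning topics above, a k-nearest-neighbor classifier can be sketched in a few lines of Python; the toy dataset and the choice of k here are purely illustrative:

```python
from collections import Counter
import math

def knn_predict(train_points, train_labels, query, k=3):
    """Predict the label of `query` by majority vote among its k nearest training points."""
    # Sort training indices by Euclidean distance to the query point.
    nearest = sorted(
        range(len(train_points)),
        key=lambda i: math.dist(train_points[i], query),
    )
    # Take the majority label among the k closest points.
    votes = Counter(train_labels[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D dataset: two clusters labeled "a" and "b".
X = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (1.0, 1.0), (0.9, 1.1), (1.1, 0.9)]
y = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(X, y, (0.15, 0.15)))  # → a
```

The classifier stores the training data and defers all computation to prediction time, which is why k-NN is often called a "lazy" learner.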
Students are expected to have some familiarity with linear algebra and probability, and should be comfortable programming in at least one language. CSE 150 is not a prerequisite, but having taken it is a big plus!
There is no required text for this course. Slides or notes will be posted on the class website. We recommend the following textbooks for optional reading.
- Richard Duda, Peter Hart and David Stork, Pattern Classification, 2nd ed. John Wiley & Sons, 2001.
- Tom Mitchell, Machine Learning. McGraw-Hill, 1997.
- Michael Kearns and Umesh Vazirani, An Introduction to Computational Learning Theory. MIT Press, 1994.
- Trevor Hastie, Robert Tibshirani and Jerome Friedman, The Elements of Statistical Learning, 2nd ed. Springer, 2009.