The goal of this class is to provide a broad introduction to machine learning. The topics covered include supervised learning methods, such as k-nearest neighbor classifiers, decision trees, boosting, and perceptrons, and unsupervised learning methods, such as k-means and hierarchical clustering.
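To give a flavor of the algorithms listed above, here is a minimal sketch of a k-nearest neighbor classifier. This is an illustration only, not course code; the function and variable names are our own.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Predict the label of point x by majority vote among its k nearest training points."""
    # Euclidean distance from x to every training point
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k closest training points
    nearest = np.argsort(dists)[:k]
    # Majority vote over their labels
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy example: two well-separated clusters
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.2, 0.2]), k=3))  # a point near the first cluster
```

Despite its simplicity, k-nearest neighbor already raises the kinds of questions this class studies: how the choice of k and of distance function affects accuracy, and how performance degrades in high dimensions.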

The topics covered in this class will differ from those covered in CSE 150. In addition to the algorithms themselves, this class will focus on the principles behind them.

There is no required text for this course. Slides or notes will be posted on the class website. We recommend the following textbooks for optional reading.

  • Richard Duda, Peter Hart and David Stork, Pattern Classification, 2nd ed. John Wiley & Sons, 2001.

  • Tom Mitchell, Machine Learning. McGraw-Hill, 1997.

  • Michael Kearns and Umesh Vazirani, An Introduction to Computational Learning Theory. MIT Press, 1994.

  • Trevor Hastie, Robert Tibshirani and Jerome Friedman, The Elements of Statistical Learning, 2nd ed. Springer, 2009.