CSE 250A. Principles of Artificial Intelligence:
Probabilistic methods for reasoning and decision-making under uncertainty. Topics include: inference and learning in directed probabilistic graphical models; prediction and planning in Markov decision processes; applications to computer vision, robotics, speech recognition, natural language processing, and information retrieval.
The course is aimed broadly at advanced undergraduates and beginning graduate students in mathematics, science, and engineering. Prerequisites are elementary probability, multivariable calculus, linear algebra, and basic programming ability in some high-level language such as Python, Matlab, R, Julia, Java, or C. Programming assignments are completed in the language of the student's choice.
CSE 250A covers largely the same topics as CSE 150A, but at a faster pace and a more advanced mathematical level. The homework assignments and exams in CSE 250A are also longer and more challenging. In general, you should not take CSE 250A if you have already taken CSE 150A.
|Thu Sep 23||Administrivia and course overview.|
|Tue Sep 28||Modeling uncertainty, review of probability, explaining away.||HW 1 out.|
|Thu Sep 30||Belief networks: from probabilities to graphs.|
|Tue Oct 05||Representing conditional probability tables. Conditional independence and d-separation.||HW 1 due. HW 2 out.|
|Thu Oct 07||Probabilistic inference in polytrees.|
|Tue Oct 12||More algorithms for inference: node clustering, cutset conditioning, likelihood weighting.||HW 2 due. HW 3 out.|
|Thu Oct 14||Markov Chain Monte Carlo algorithms for inference. Learning from complete data.|
|Tue Oct 19||Maximum likelihood estimation. Markov models of language. Naive Bayes models of text.||HW 3 due. HW 4 out.|
|Thu Oct 21||Linear regression and least squares. Detour on numerical optimization.|
|Tue Oct 26||Logistic regression, gradient descent, Newton's method. Learning from incomplete data.||HW 4 due. HW 5 out.|
|Thu Oct 28||EM algorithm for discrete belief networks: derivation and proof of convergence.|
|Tue Nov 02||EM algorithms for word clustering and linear interpolation.||HW 5 due. HW 6 out.|
|Thu Nov 04||EM algorithms for noisy-OR and matrix completion. Discrete hidden Markov models.|
|Tue Nov 09||Computing likelihoods and Viterbi paths in hidden Markov models.||HW 6 due. HW 7 out.|
|Wed Nov 10||Make-up lecture. Forward-backward algorithm in HMMs. Gaussian mixture models.|
|Thu Nov 11||Veterans Day holiday.|
|Tue Nov 16||Linear dynamical systems. Reinforcement learning and Markov decision processes.||HW 7 due. HW 8 out.|
|Thu Nov 18||State and action value functions, Bellman equations, policy evaluation, greedy policies.|
|Tue Nov 23||Policy improvement and policy iteration. Value iteration. Algorithm demos.||HW 8 due. HW 9 out.|
|Thu Nov 25||Thanksgiving holiday.|
|Tue Nov 30||Convergence of value iteration. Model-free algorithms. Temporal difference prediction.|
|Thu Dec 02||Q-learning, RL in large state spaces. Bonus topics. Course wrap-up.||HW 9 due.|
|Mon Dec 06||Remote (take-home) final exam.|
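As a taste of the first unit (modeling uncertainty and explaining away), here is a minimal sketch of the explaining-away effect in a two-cause belief network. All network structure and probabilities below are invented for illustration; they are not taken from the course materials.

```python
# Toy belief network: Burglary (B) and Earthquake (E) are independent
# causes of Alarm (A). All probabilities are made up for illustration.
pB, pE = 0.01, 0.02  # prior probabilities of each cause

def pA(b, e):
    # Conditional probability table P(A=1 | B=b, E=e), invented values.
    return {(0, 0): 0.001, (0, 1): 0.3, (1, 0): 0.9, (1, 1): 0.95}[(b, e)]

def posterior_B(evidence_E=None):
    # P(B=1 | A=1 [, E=evidence_E]) by brute-force enumeration of the joint.
    num = den = 0.0
    for b in (0, 1):
        for e in (0, 1):
            if evidence_E is not None and e != evidence_E:
                continue
            p = (pB if b else 1 - pB) * (pE if e else 1 - pE) * pA(b, e)
            den += p
            if b:
                num += p
    return num / den

pB_given_A = posterior_B()        # P(B=1 | A=1): the alarm raises belief in a burglary
pB_given_A_E = posterior_B(1)     # P(B=1 | A=1, E=1): the earthquake "explains away" the alarm
```

With these numbers, observing the alarm alone makes a burglary much more likely than its prior, while additionally observing the earthquake drives the burglary posterior back down, even though B and E are independent a priori.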
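The final unit (Markov decision processes through value iteration) can likewise be previewed with a short sketch. The two-state MDP below, including its transition probabilities, rewards, and discount factor, is entirely made up; it only illustrates the Bellman optimality update covered in the lectures.

```python
# Hypothetical 2-state, 2-action MDP for illustrating value iteration.
import numpy as np

gamma = 0.9  # discount factor (assumed)

# P[a, s, s'] = P(s' | s, a); rows sum to 1. Invented numbers.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],    # transitions under action 0
              [[0.5, 0.5], [0.3, 0.7]]])   # transitions under action 1
# R[s, a] = expected immediate reward. Invented numbers.
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality update:
    # Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s'),  V(s) = max_a Q(s,a)
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmax(axis=1)  # greedy policy with respect to the converged values
```

Because the update is a gamma-contraction in the max norm, the iteration converges to the unique optimal value function, which is the convergence result discussed in the Nov 30 lecture.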