CSE 250A. Principles of Artificial Intelligence
There will be a quiz in class every Tuesday starting on October 9, 2012. The last two quizzes will be on Thursdays, November 29 and December 6. In total there will be nine quizzes. The lowest quiz grade will be discarded, so one quiz may be missed without penalty.

There will be eight homework assignments. The first six will be handed out on Thursdays, starting on October 4, and due back at the start of lecture the following Thursday. Quizzes and assignments will both be done in pairs; please pay attention to the detailed instructions.
Please ask questions using Piazza.
Methods based on probability theory for reasoning and learning under uncertainty. Topics will include directed and undirected probabilistic graphical models, exact and approximate inference, latent variables, expectation-maximization, hidden Markov models, Markov decision processes, applications to vision, robotics, speech, and/or text.
The course is aimed primarily at first-year graduate students in mathematics, science, and engineering. Prerequisites are elementary probability, multivariable calculus, linear algebra, and basic programming ability in a high-level language such as C, Java, R, or Matlab. Programming assignments are completed in the language of the student's choice.
CSE 150 covers some of the same material as 250A, but at a slower pace and a less advanced mathematical level. The homework assignments in CSE 250A are longer and more challenging. CSE 250B is at the same level as 250A, but has different content and style. Students may take either or both of 250A and 250B, in any order.
Lecture notes will be linked in the table below as PDF files. The notes for a given day may appear in the PDF file for an earlier day.
Date | Topics | Quizzes and assignments
October 2 | Overview of the course, intro to Bayesian networks |
October 4 | Laws of probability theory, Bayesian network for the earthquake scenario | First homework assignment distributed
October 8 | Section notes |
October 9 | Simpson's paradox, explaining away, formal definition of a Bayesian network | Quiz 1
October 11 | Conditional probability tables, logistic regression, d-separation | Second homework assignment distributed
October 15 | Section notes |
October 16 | Independence as absence of information flow, examples of d-separation | Quiz 2
October 18 | Algorithm for computing p(X|E) in polytree networks | Third homework assignment distributed
October 22 | Section notes (coming soon) |
October 23 | Merging nodes and cutset conditioning for inference in loopy networks | Quiz 3
October 24 | Section notes on inference via stochastic sampling |
October 25 | Principle of maximum likelihood (ML); ML learning of parameters for a Bayesian network | Fourth assignment distributed
October 26 | Section notes |
October 30 | Markov models of sentences; linear regression viewed as a Bayesian network | Quiz 4
November 1 | Expectation-maximization (EM) | Fifth assignment distributed
November 6 | EM for Bayesian networks; context-based language model | Quiz 5
November 8 | 21st-century data analysis and the election; EM for context-based language models and for mixture models | Sixth assignment distributed
November 9 and 12 | Section notes on EM and mixture models |
November 13 | Hidden Markov models (HMMs) | Quiz 6
November 15 | Forward algorithm and Viterbi algorithm | Seventh assignment distributed, due November 27
November 16 | Section notes on HMM algorithms |
November 20 | EM training of HMMs; linear dynamical systems | Quiz 7
November 22 | Thanksgiving: no class |
November 27 | Reinforcement learning (RL) | Last assignment distributed, due December 6
November 29 | Policy iteration, restricted linear value functions | Quiz 8
November 30 | Section notes on MDPs and RL |
December 4 | Approximate policy evaluation |
December 6 | Least-squares policy iteration | Quiz 9
December 12 | Final exam (Wednesday, 11:30am to 2:30pm) |