CSE 259: UCSD AI Seminar, Fall 2017

Term: Fall Qtr 2017
Time: Monday 12-1pm, EBU3B 4140

In this seminar, local and external speakers present their research. Fall 2017 seminars are coordinated by Ndapa Nakashole.


Week Date Speaker Affiliation
1 October 2 Vitor Carvalho Lead Research Scientist, Snap Research
2 October 9 Kai-Wei Chang UCLA
3 October 16 Andrew Kahng UCSD CSE & ECE
4 October 23 Oren Etzioni CEO, Allen Institute for Artificial Intelligence (AI2)
5 October 30 Angela Yu UCSD CogSci
6 November 6 Russell Impagliazzo UCSD CSE
7 November 13 Shuai Tang UCSD CogSci
8 November 20 Michael Yip UCSD ECE
9 November 27 Haipeng Luo USC
10 December 4 Chun-Nan Hsu UCSD Bioinformatics

Week 1: Vitor Carvalho, Snap Research

Personalized Neural Conversation Models and other research projects at Snapchat
In this talk, we will begin with a brief overview of some of the projects currently under consideration at Snap Research. We will then focus on recent advances in Deep Learning that have sparked interest in modeling language, particularly for personalized conversational agents that can retain contextual information during dialog exchanges. We explore and compare several recently proposed neural conversation models and evaluate the multiple factors that can affect predictive performance. Based on the tradeoffs among these models, we propose a new neural generative dialog model, conditioned on speakers as well as context history, that outperforms previous models on both retrieval and generative metrics.
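The speaker-conditioning idea in the abstract can be illustrated with a toy sketch (hypothetical code, not the model from the talk): the dialog context and a per-speaker embedding are combined before scoring the next token, so the same context can yield different predictions for different speakers. All names, shapes, and parameters below are illustrative assumptions, and the random values stand in for learned weights.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, EMB, N_SPEAKERS = 20, 8, 3

# Hypothetical parameters: token embeddings, per-speaker embeddings,
# and an output projection. In a real model these would be learned.
tok_emb = rng.normal(size=(VOCAB, EMB))
spk_emb = rng.normal(size=(N_SPEAKERS, EMB))
W_out = rng.normal(size=(2 * EMB, VOCAB))

def next_token_logits(context_tokens, speaker_id):
    """Score the next token given the dialog context AND the speaker identity.

    The context is summarized naively as a mean of token embeddings;
    an actual model would use a recurrent or attention-based encoder.
    """
    ctx = tok_emb[context_tokens].mean(axis=0)          # context summary
    state = np.concatenate([ctx, spk_emb[speaker_id]])  # condition on speaker
    return state @ W_out                                # unnormalized scores

# The same context yields different predictions for different speakers:
context = [3, 7, 7, 1]
pred_a = int(np.argmax(next_token_logits(context, speaker_id=0)))
pred_b = int(np.argmax(next_token_logits(context, speaker_id=1)))
```

The point of the sketch is only the conditioning pattern: speaker identity enters the decoder state directly, which is what lets a generative dialog model produce persona-consistent responses.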

Vitor Carvalho is a Lead Research Scientist at Snap Research. He is interested in applied research at the intersection of Machine Learning, Natural Language Processing, Data Mining, and Search. He completed his PhD at Carnegie Mellon University under William W. Cohen. He has worked at Qualcomm Research, Microsoft Bing, and Ericsson, and can be found @vitroc.

Week 2: Kai-Wei Chang, UCLA

Structured Prediction: Practical Advancements and Applications in Natural Language Processing
Many machine learning problems involve making joint predictions over a set of mutually dependent output variables. The dependencies between output variables can be represented by a structure, such as a sequence, a tree, a clustering of nodes, or a graph. Structured prediction models have been proposed for problems of this type, and they have been shown to be successful in many application areas, such as natural language processing, computer vision, and bioinformatics. In this talk, I will describe a collection of results that improve several aspects of these approaches. Our results lead to efficient learning algorithms for structured prediction models, which, in turn, support reduction in problem size, improvements in training and evaluation speed. I will also discuss potential risks and challenges when using structured prediction models. Related information is on my homepage.
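To make "joint predictions over a set of mutually dependent output variables" concrete, here is a minimal Viterbi decoder for sequence labeling (an illustrative sketch, not code from the talk): transition scores couple adjacent labels, so the best sequence can differ from picking each label independently.

```python
import numpy as np

def viterbi(emissions, transitions):
    """Jointly decode a label sequence: each label depends on its neighbor
    through the transition scores, so labels cannot be chosen independently.

    emissions:   (T, K) per-position label scores
    transitions: (K, K) score for label j following label i
    """
    T, K = emissions.shape
    score = emissions[0].copy()          # best score ending in each label
    back = np.zeros((T, K), dtype=int)   # backpointers
    for t in range(1, T):
        cand = score[:, None] + transitions   # scores for all (prev, cur) pairs
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + emissions[t]
    # Follow backpointers from the best final label.
    labels = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        labels.append(int(back[t][labels[-1]]))
    return labels[::-1]

# Per-position scores favor labels 0, 1, 0 independently...
emissions = np.array([[2.0, 0.0],
                      [0.0, 2.0],
                      [2.0, 0.0]])
# ...but switching labels between adjacent positions is heavily penalized.
transitions = np.array([[0.0, -5.0],
                        [-5.0, 0.0]])

print(viterbi(emissions, transitions))   # [0, 0, 0], not the greedy [0, 1, 0]
```

Here greedy per-position prediction would choose [0, 1, 0] (total score 6 from emissions but -10 from transitions), while joint decoding prefers the consistent [0, 0, 0] (score 4). This is the tradeoff that structured prediction models optimize over.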

Bio: Kai-Wei Chang is an assistant professor in the Department of Computer Science at the University of California, Los Angeles. He has published broadly in machine learning and natural language processing. His research has mainly focused on designing machine learning methods for handling large and complex data. He has been involved in developing several machine learning libraries, including LIBLINEAR, Vowpal Wabbit, and Illinois-SL. He was an assistant professor at the University of Virginia in 2016-2017. He obtained his Ph.D. from the University of Illinois at Urbana-Champaign in 2015 and was a post-doctoral researcher at Microsoft Research in 2016. Kai-Wei was awarded the EMNLP Best Long Paper Award (2017), KDD Best Paper Award (2010), and the Yahoo! Key Scientific Challenges Award (2011). Additional information is available at http://kwchang.net

Week 3: Andrew Kahng, UCSD CSE & ECE

ML Problems Arising in Integrated-Circuit Design
As classic “Moore’s Law” geometric scaling slows, it has fallen upon electronic design automation (EDA) to deliver "design-based equivalent scaling" that helps to continue the Moore’s-Law scaling of semiconductor value. A powerful lever for this will be the use of machine learning (ML) techniques, both inside and “around” EDA tools. This talk will give a “lightning round” of open problem formulations for ML that arise in integrated-circuit design. Each of these examples has available data sources/datasets and “motivated customers” (e.g., at EDA companies, semiconductor product companies, and/or foundries). Relevant problem types and ML techniques span classification, active learning, clustering, reinforcement learning, etc. Some background:
semiengineering.com/using-machine-learning-in-eda/

Andrew B. Kahng is Professor of CSE and ECE at UC San Diego, where he holds the endowed chair in High-Performance Computing. He has served as visiting scientist at Cadence (1995-1997) and as founder/CTO at Blaze DFM (2004-2006). He is the coauthor of 3 books and over 400 journal and conference papers, holds 33 issued U.S. patents, and is a fellow of ACM and IEEE. He has served as general chair of DAC, ISQED, ISPD and other conferences. He served as international chair/co-chair of the Design Technology working group, and of the System Integration focus team, for the International Technology Roadmap for Semiconductors (ITRS) from 2000-2016. His research interests include IC physical design and performance analysis, the IC design-manufacturing interface, combinatorial algorithms and optimization, and the roadmapping of systems and technology.

Week 4: Oren Etzioni, Allen Institute for Artificial Intelligence

The Future of AI
Given the recent rapid advances in AI, what will the field look like in 5 to 10 years? What are the open problems that deep learning and reinforcement learning are not able to solve? And how will AI advances affect our society? My talk will address these questions in a non-technical manner.

Dr. Oren Etzioni is Chief Executive Officer of the Allen Institute for Artificial Intelligence. He has been a Professor at the University of Washington's Computer Science department since 1991, receiving several awards including Seattle's Geek of the Year (2013), the Robert Engelmore Memorial Award (2007), the IJCAI Distinguished Paper Award (2005), AAAI Fellow (2003), and a National Young Investigator Award (1993). He has been the founder or co-founder of several companies including Farecast (sold to Microsoft in 2008) and Decide (sold to eBay in 2013). He has written commentary on AI for the New York Times, Nature, Wired, and the MIT Technology Review. He helped to pioneer meta-search (1994), online comparison shopping (1996), machine reading (2006), and Open Information Extraction (2007). He has authored over 100 technical papers that have garnered over 1,800 highly influential citations on Semantic Scholar. He received his Ph.D. from Carnegie Mellon University in 1991, and his B.A. from Harvard in 1986.