CSE 259: UCSD AI Seminar, Spring 2021

Term: Spring Qtr 2021
Time: Monday 12-1pm, Zoom

This seminar is offered to AI graduate students every quarter by AI faculty.
Each week, an invited speaker presents their research.
The Spring 2021 seminars are coordinated by Ndapa Nakashole.


Schedule
Week  Date      Speaker               Affiliation
1     March 29  Jean Honorio          Assistant Professor, Purdue University
2     April 5   Shashank Srivastava   Assistant Professor, University of North Carolina, Chapel Hill
3     April 12  Laura Dietz           Assistant Professor, University of New Hampshire
4     April 19  TBD
5     April 26  TBD
6     May 3     TBD
7     May 10    TBD
8     May 17    TBD
9     May 24    TBD
10    May 31    TBD


Week 3: Laura Dietz, University of New Hampshire

Retrieve-and-generate: How to Automatically Create Relevant Articles
A lot of progress has been made towards answering specific, formalized information needs, such as questions or detailed search queries. However, users who are familiarizing themselves with a new domain would like to read overviews that explain "everything one needs to know" about a topic, instead of having to ask questions one by one. So far, such users either find an overview article on the web or a wiki, or they are left to piece the overview together on their own. The vision of complex answer retrieval is to develop algorithms that can produce comprehensive overviews for given topics such as "Zika fever", "Green Sea Turtle", or "Reducing air pollution". The success of strong neural models for language generation suggests the feasibility of this idea. However, several tasks, such as subtopic detection and story generation, need to be addressed before retrieve-and-generate systems will provide information-rich, relevant, and useful overviews. This talk gives an overview of the research advances resulting from TREC Complex Answer Retrieval and the years since. More information about TREC CAR and the datasets is available at http://trec-car.cs.unh.edu
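
As a rough illustration only (not the speaker's system): a retrieve-and-generate pipeline typically first retrieves passages relevant to a topic, detects its subtopics, and then conditions a text generator on the retrieved support. The sketch below uses hypothetical helper functions (retrieve_passages, detect_subtopics, generate_section), passed in as parameters, to show this control flow under those assumptions.

    # Hedged sketch of a generic retrieve-then-generate loop; the helper functions
    # are hypothetical placeholders, not components of the speaker's system.
    def build_overview(topic, retrieve_passages, detect_subtopics, generate_section):
        passages = retrieve_passages(topic)                  # e.g., a lexical or neural retriever
        sections = []
        for subtopic in detect_subtopics(topic, passages):   # subtopic detection step
            support = retrieve_passages(f"{topic}: {subtopic}")
            sections.append((subtopic, generate_section(subtopic, support)))
        return sections                                      # (heading, generated text) pairs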

Bio:
Laura Dietz is an Assistant Professor at the University of New Hampshire, where she leads the lab for text retrieval, extraction, machine learning and analytics (TREMA). She organizes a tutorial/workshop series on Utilizing Knowledge Graphs in Text-centric Retrieval (KG4IR) and coordinates the TREC Complex Answer Retrieval Track. She received an NSF CAREER Award for utilizing fine-grained knowledge annotations in text understanding and retrieval. Previously, she was a research scientist at the Data and Web Science Group at Mannheim University and the Center for Intelligent Information Retrieval (CIIR) at UMass Amherst. She obtained her doctoral degree from the Max Planck Institute for Informatics with a thesis on topic models for networked data. More Info: https://www.cs.unh.edu/~dietz



Week 2: Shashank Srivastava, University of North Carolina (UNC) Chapel Hill

Few-shot Learning with Interactive Language
Today machine learning is largely about function approximation from large volumes of labeled data. However, humans learn through multiple mechanisms in addition to inductive inference. In particular, we can efficiently learn and communicate new knowledge about the world through natural language and our educational systems rely on learning processes that are deeply intertwined with language, e.g., reading books, listening to lectures, engaging in student-teacher dialogs. In this talk, we will explore some recent work on building automated learning systems that can learn new tasks through natural language interactions with their users in scenarios with limited labeled data. We will cover multiple scenarios to demonstrate this idea: learning web-based tasks from descriptions and demonstrations; using language to communicate domain knowledge for reasoning tasks; and leveraging natural language patterns in conjunction with large language models for few-shot learning.
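
As a minimal, hypothetical illustration of the last scenario (natural language patterns combined with a large language model for few-shot learning), the sketch below assembles a prompt from a plain-language task description plus a handful of labeled examples; query_llm is a placeholder for whatever model API is used, not a real call.

    # Hedged sketch of prompt construction for few-shot learning with an LLM.
    def few_shot_prompt(task_description, examples, new_input):
        lines = [task_description]                       # natural-language task knowledge
        for text, label in examples:                     # a few labeled demonstrations
            lines.append(f"Input: {text}\nLabel: {label}")
        lines.append(f"Input: {new_input}\nLabel:")      # the instance to classify
        return "\n\n".join(lines)

    # Hypothetical usage; `query_llm` is a placeholder, not a real API:
    # prediction = query_llm(few_shot_prompt(
    #     "Label each review as positive or negative.",
    #     [("Loved it!", "positive"), ("Terrible.", "negative")],
    #     "Pretty good overall."))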

Bio:
Shashank Srivastava is an assistant professor in the Computer Science department at the University of North Carolina (UNC) Chapel Hill. Shashank received his PhD from the Machine Learning department at CMU in 2018, and was an AI Resident at Microsoft Research in 2018-19. Shashank's research interests lie in conversational AI, interactive machine learning and grounded language understanding. Shashank has an undergraduate degree in Computer Science from IIT Kanpur, and a Master’s degree in Language Technologies from CMU. He received the Yahoo InMind Fellowship for 2016-17. His research has been covered by popular media outlets including GeekWire and New Scientist.



Week 1: Jean Honorio, Purdue University

Fair Sparse Regression with Clustering: An Invex Relaxation for a Combinatorial Problem
We study the problem of fair sparse regression on a biased dataset where the bias depends upon a hidden binary attribute. The presence of a hidden attribute adds an extra layer of complexity to the problem by combining sparse regression and clustering with unknown binary labels. The corresponding optimization problem is combinatorial, but we propose a continuous relaxation, resulting in an invex optimization problem. To the best of our knowledge, this is the first invex relaxation for a combinatorial problem. We show that our method recovers the correct support of the regression parameter vector, as well as the exact value of the hidden attribute for each sample. These theoretical guarantees hold as long as the number of samples scales logarithmically with the dimension of the regression parameter vector. The result above serves as a gentle introduction to a unifying framework, which uses the power of continuous relaxations (beyond convexity), Karush-Kuhn-Tucker conditions, primal-dual certificates, and concentration inequalities. This framework has allowed us to produce novel algorithms for several NP-hard combinatorial problems, such as learning Bayesian networks, graphical games, inference in structured prediction, and community detection.
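
As a hedged sketch of how such a setup is often formalized (the talk's exact model and notation may differ): each sample has a response generated from a sparse parameter vector plus a shift that depends on a hidden binary attribute, and jointly estimating both is what makes the problem combinatorial.

    % Hypothetical formalization for illustration only; may differ from the talk.
    % Sample i has features x_i, response y_i, and a hidden attribute z_i:
    \[
      y_i = \langle x_i, \beta^* \rangle + \gamma^* z_i + \varepsilon_i,
      \qquad z_i \in \{-1, +1\}, \quad \beta^* \text{ sparse}.
    \]
    % Jointly estimating the sparse \beta^* and the binary z_i is combinatorial; an invex
    % relaxation replaces the discrete constraint with a continuous one while keeping the
    % property that every stationary point of the relaxed objective is a global minimum.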

Bio:
Jean Honorio is an Assistant Professor in the Computer Science Department at Purdue University, as well as in the Statistics Department (by courtesy). Prior to joining Purdue, Jean was a postdoctoral associate at MIT, working with Tommi Jaakkola. His Erdős number is 3. His work has been partially funded by NSF. He is an editorial board reviewer for JMLR, and has served as a senior PC member of IJCAI and AAAI, and as a PC member of NeurIPS, ICML, and AISTATS, among other conferences and journals.