CSE 259 is a seminar course in which faculty, students, and visitors will present their recent and ongoing research, including works in progress.
Classes will take place on Mondays from 12:00-13:00 in the Computer Science Building, EBU-3b 4140.
There will ostensibly be pizza.
Fall 2016 seminars are organized by Julian McAuley.
| Week | Date | Speaker | Affiliation |
|------|------|---------|-------------|
| Week 1 | September 26 | UCSD PhD Students | UCSD, CSE |
| Week 2 | October 3 | Fragkiskos Malliaros | UCSD, CSE |
| Week 3 | October 10 | Zachary Lipton | UCSD, CSE |
| Week 4 | October 17 | Arun Kumar | UCSD, CSE |
| Week 5 | October 24 | Amin Karbasi | Yale |
| Week 6 | October 31 | Risi Kondor | UChicago, IPAM |
| Week 7 | November 7 | Laurel Riek | UCSD, CSE |
| Week 8 | November 14 | Yizhou Sun | UCLA |
| Week 9 | November 21 | Piya Pal | UCSD, ECE |
| Week 10 | November 28 | Sameer Singh | UC Irvine |
Graduate students who interned in 2016 will discuss their (non-NDA-blocked) research. Speakers include Olivia Simpson, Mengting Wan, Sharad Vikram, and Julaiti Alafate.
Networks (or graphs) have become ubiquitous as data from diverse disciplines can naturally be mapped to graph structures. In this talk, I will present part of my work that generally focuses on how to: (i) design models for analyzing the structure and dynamics of real-world networks towards unraveling properties that can further be used in practical applications; (ii) develop algorithmic tools for large-scale analytics on data with inherent (e.g., social networks) or without inherent (e.g., text) graph structure. In particular, in the first part of the talk I will discuss the problem of spectral robustness estimation in large social graphs and its application to anomaly detection. In the second part, I will focus on the problem of information spreading and more precisely on how to identify influential spreaders in complex networks using degeneracy-based graph mining methods. Finally, considering an analogy between influential users in social networks and influential words in text, I will also briefly discuss how the concept of graph degeneracy can be applied in the domain of text analytics and in particular in the problem of keyword selection.
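The degeneracy-based methods mentioned above build on the k-core decomposition. As a minimal illustration (a toy sketch, not the speaker's code), core numbers can be computed with the classic peeling algorithm of Batagelj and Zaversnik:

```python
from collections import defaultdict

def core_numbers(edges):
    """Core number of each node: the largest k such that the node lies in
    a subgraph where every node has degree >= k. Nodes with high core
    number form the dense 'degenerate core' from which influential
    spreaders are typically drawn."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    core = {u: len(nbrs) for u, nbrs in adj.items()}
    remaining = set(adj)
    while remaining:
        # Peel off the current minimum-degree node; its degree at removal
        # time is its core number.
        u = min(remaining, key=lambda x: core[x])
        remaining.discard(u)
        for v in adj[u]:
            # Decrement neighbors, but never below the core just assigned.
            if v in remaining and core[v] > core[u]:
                core[v] -= 1
    return core

# Toy graph: a triangle {1,2,3} with a pendant node 4.
cores = core_numbers([(1, 2), (2, 3), (1, 3), (3, 4)])
```

On this toy graph the triangle nodes get core number 2 and the pendant node gets 1, so a degeneracy-based spreader ranking would prefer the triangle.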
When rewards are sparse and action spaces large, Q-learning with ϵ-greedy exploration can be inefficient. This poses problems for otherwise promising applications such as task-oriented dialogue systems, where the primary reward signal, indicating successful completion of a task, requires a complex sequence of appropriate actions. Under these circumstances, a randomly exploring agent might never stumble upon a successful outcome in reasonable time. We present two techniques that significantly improve the efficiency of exploration for deep Q-learning agents in dialogue systems. First, we introduce an exploration technique based on Thompson sampling, drawing Monte Carlo samples from a Bayes-by-backprop neural network, demonstrating marked improvement over common approaches such as ϵ-greedy and Boltzmann exploration. Second, we show that spiking the replay buffer with experiences from a small number of successful episodes, which are easy to harvest for dialogue tasks, can make Q-learning feasible when it might otherwise fail.
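The replay-buffer "spiking" idea can be sketched in a toy setting (tabular rather than deep Q-learning, and a hypothetical chain MDP in place of a dialogue task; this is illustrative, not the speakers' implementation):

```python
import random

def chain_step(s, a, n=6):
    """Toy sparse-reward chain MDP: action 1 moves right, 0 moves left;
    reward 1 only on reaching the final state."""
    s2 = min(s + 1, n - 1) if a == 1 else max(s - 1, 0)
    done = s2 == n - 1
    return s2, (1.0 if done else 0.0), done

def q_learning(episodes=300, n=6, spike=None, alpha=0.5, gamma=0.95,
               eps=0.2, seed=0):
    """Tabular Q-learning with a replay buffer. `spike` optionally seeds
    ('spikes') the buffer with transitions from a successful episode, so
    the goal reward can propagate through replay from the very start."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n)]
    buffer = list(spike or [])
    for _ in range(episodes):
        s = 0
        for _ in range(20):                      # episode length cap
            # ϵ-greedy behavior policy.
            a = rng.randrange(2) if rng.random() < eps else int(Q[s][1] >= Q[s][0])
            s2, r, done = chain_step(s, a, n)
            buffer.append((s, a, r, s2, done))
            # Replay a few random transitions per environment step.
            for bs, ba, br, bs2, bdone in rng.sample(buffer, min(8, len(buffer))):
                target = br if bdone else br + gamma * max(Q[bs2])
                Q[bs][ba] += alpha * (target - Q[bs][ba])
            s = s2
            if done:
                break
    return Q

# One hand-rolled successful episode (always move right) to spike the buffer.
demo = [(s, 1, (1.0 if s == 4 else 0.0), s + 1, s == 4) for s in range(5)]
Q = q_learning(spike=demo)
```

After training, the greedy policy moves right from every state, with the goal-adjacent value near its optimum of 1.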
Analysis of large and complex datasets using machine learning (ML) techniques, an area popularly known as "advanced analytics," has become central to numerous data-driven applications in domains ranging from Web companies and enterprises to physical sciences and healthcare. With its roots in augmenting relational databases with data mining algorithms in the early 2000s, advanced analytics has emerged as an active and impactful field of research bringing together multiple communities: data management, systems, ML, and human-computer interaction. In this talk, I will give an overview of the field of advanced analytics and highlight the key research problems addressed by academia and industry. I will then dive into the research problems in this space being tackled by my research group. Specifically, I will talk about four ongoing and future projects: (1) Exploiting classical database dependencies to improve the efficiency of ML and short-circuit feature selection, (2) Integrating linear algebra and relational algebra to closely integrate feature engineering and other data transformations with ML, (3) Building a fine-grained pricing framework for ML tasks over relational data traded in cloud data marketplaces, and (4) Building a system to support the end-to-end process of model engineering and inference for deep neural networks for relational data analytics.
For the last several years, we have witnessed the emergence of datasets of an unprecedented scale across different scientific disciplines. The large volume of such datasets presents new computational challenges as the diverse, feature-rich, and usually high-resolution data does not allow for effective data-intensive inference. In this regard, data summarization is a compelling (and sometimes the only) approach that aims at both exploiting the richness of large-scale data and being computationally tractable: instead of operating on complex and large data directly, carefully constructed summaries not only enable the execution of various data analytics tasks but also improve their efficiency and scalability.
A systematic way for data summarization is to turn the problem into selecting a subset of data elements optimizing a utility function that quantifies "representativeness" of the selected set. Oftentimes, these objective functions satisfy submodularity, an intuitive notion of diminishing returns stating that selecting any given element earlier helps more than selecting it later. Thus, many problems in data summarization require maximizing submodular set functions subject to cardinality constraints, and massive data means we have to solve these problems at scale.
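The diminishing-returns property is easy to see on a coverage function (a standard textbook example, not taken from the talk):

```python
def coverage(S, sets):
    """f(S) = number of items covered by the union of the chosen sets --
    a canonical monotone submodular function."""
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}}
# Adding set 1 to the small set {0} gains one new item ({4}) ...
gain_early = coverage({0, 1}, sets) - coverage({0}, sets)
# ... but adding it to the superset {0, 2} gains nothing, since {4} is
# already covered: selecting an element earlier helps at least as much.
gain_late = coverage({0, 1, 2}, sets) - coverage({0, 2}, sets)
```

Here `gain_early` is 1 while `gain_late` is 0, exactly the diminishing-returns behavior that submodularity formalizes.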
In this talk, I will present our recent efforts in developing practical schemes for data summarization. In particular, I will first discuss Stochastic-Greedy, currently the fastest centralized algorithm for data summarization, whose query complexity is only linear in the data size. However, to truly summarize data at scale we need to opt for streaming or distributed solutions. To this end, I will then present Sieve-Streaming, the first streaming algorithm for data summarization with a constant-factor approximation guarantee. Finally, I will talk about GreeDi, the first distributed approach towards data summarization. In our extensive experiments, we demonstrate the effectiveness of our approach on several applications, including sparse Gaussian process inference and exemplar-based clustering, on tens of millions of data points using Apache Spark.
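The Stochastic-Greedy idea can be sketched on a toy coverage objective (illustrative only; on this tiny instance the subsample covers the whole ground set, so it reduces to plain greedy):

```python
import math
import random

def stochastic_greedy(ground, f, k, eps=0.1, seed=0):
    """Stochastic-Greedy: at each of the k steps, evaluate marginal gains
    only on a random subsample of size about (n/k) * log(1/eps) instead
    of the whole ground set, giving a (1 - 1/e - eps) guarantee in
    expectation with only O(n log(1/eps)) total function evaluations."""
    rng = random.Random(seed)
    n = len(ground)
    sample_size = min(n, int(math.ceil((n / k) * math.log(1 / eps))))
    S, fS = set(), f(set())
    for _ in range(k):
        pool = [x for x in ground if x not in S]
        cand = rng.sample(pool, min(sample_size, len(pool)))
        best = max(cand, key=lambda x: f(S | {x}) - fS)
        S.add(best)
        fS = f(S)
    return S

# Toy monotone submodular objective: coverage of a small set system.
sets = {0: {1, 2, 3, 4}, 1: {4, 5}, 2: {6, 7}, 3: {1, 2}}

def f(S):
    return len(set().union(*[sets[i] for i in S])) if S else 0

summary = stochastic_greedy(list(sets), f, k=2, eps=0.05)
```

With `k=2` this picks set 0 (gain 4) and then set 2 (gain 2), covering 6 of the 7 items.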
The sheer size of today's datasets dictates that learning algorithms compress or reduce their input data and/or make use of parallelism. Multiresolution Matrix Factorization (MMF) makes a connection between such computational strategies and some classical themes in Applied Mathematics, namely Multiresolution Analysis and Multigrid Methods. In particular, the similarity matrices appearing in data often have multiresolution structure, which can be exploited both for learning and to facilitate computation.
In addition to the general MMF framework, I will present our new, high performance parallel MMF software library and show some results on matrix compression/sketching tasks (joint work with Nedelina Teneva, Pramod Mudrakarta, Yi Ding and Vikas Garg).
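MMF itself is beyond a short snippet, but the classical multiresolution idea it draws on can be illustrated with a toy Haar-wavelet matrix compression (not the MMF algorithm, and not from the speaker's library):

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet basis for n a power of two, built
    recursively; rows are the wavelet vectors."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])               # coarse averages
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])  # finest-scale differences
    H = np.vstack([top, bot])
    return H / np.linalg.norm(H, axis=1, keepdims=True)

def compress(A, keep):
    """Transform A into the Haar basis, keep only the `keep` largest
    coefficients, and transform back -- a crude multiresolution sketch
    that is accurate when A has multiscale (block/smooth) structure."""
    H = haar_matrix(A.shape[0])
    C = H @ A @ H.T
    thresh = np.sort(np.abs(C).ravel())[-keep]
    C[np.abs(C) < thresh] = 0.0
    return H.T @ C @ H

# A block-constant similarity matrix is exactly captured by a handful of
# coarse Haar coefficients: 4 coefficients reproduce all 64 entries.
A = np.kron(np.array([[2.0, 1.0], [1.0, 3.0]]), np.ones((4, 4)))
A_hat = compress(A, keep=4)
```

The point of the example is the one the abstract makes: matrices with multiresolution structure compress extremely well in a wavelet-like basis, which is also what makes the associated computations cheap.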
The Robotics and Healthcare Engineering Lab at UCSD works to enable robots to sense, learn, and adapt to real people in the real world. We work on creating new algorithms for robots that leverage coordination dynamics and contextual processing to enable them to be robust and independent in the long term. Applications of this work include neurorehabilitation, advanced manufacturing, and home health. In this talk I will describe some of our lab's recent work.
One of the challenges in mining information networks is the lack of an intrinsic metric for representing nodes in a low-dimensional space, which is essential in many mining tasks, such as anomaly detection, recommendation, and link prediction. Moreover, when it comes to heterogeneous information networks, where nodes belong to different types and links carry different semantic meanings, it is even more challenging to represent nodes properly for a particular task. In this talk, I will introduce our recent progress on network embedding approaches that are designed for heterogeneous information networks and guided by specific tasks, and discuss (1) how to represent nodes when different types of nodes and links are involved; (2) how different tasks can guide the embedding process; and (3) how heterogeneous links play different roles in these tasks. Our results on several application domains, including enterprise networking, social networks, bibliographic data, and biomedical data, have demonstrated the superiority as well as the interpretability of these new methodologies.
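One common building block for heterogeneous network embedding (in the style of metapath2vec, not necessarily the speaker's method) is meta-path-guided random walks, which respect node types when generating the sequences fed to a skip-gram model:

```python
import random

def metapath_walks(adj, node_type, metapath, walks_per_node, length, seed=0):
    """Type-constrained random walks: from each node of the meta-path's
    start type, repeatedly step to a uniformly random neighbor whose type
    matches the next symbol of the (cyclic) meta-path. Assumes
    metapath[0] == metapath[-1] so the pattern can repeat."""
    rng = random.Random(seed)
    cycle = metapath[1:]
    walks = []
    for start in [v for v in adj if node_type[v] == metapath[0]]:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < length:
                want = cycle[(len(walk) - 1) % len(cycle)]
                nbrs = [u for u in adj[walk[-1]] if node_type[u] == want]
                if not nbrs:
                    break          # dead end: no neighbor of required type
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

# Hypothetical toy bibliographic network: authors (A), papers (P), venue (V).
adj = {'a1': ['p1'], 'a2': ['p1', 'p2'],
       'p1': ['a1', 'a2', 'v1'], 'p2': ['a2', 'v1'],
       'v1': ['p1', 'p2']}
node_type = {'a1': 'A', 'a2': 'A', 'p1': 'P', 'p2': 'P', 'v1': 'V'}
walks = metapath_walks(adj, node_type, ['A', 'P', 'A'],
                       walks_per_node=3, length=5)
```

Under the Author-Paper-Author meta-path, every walk alternates authors and papers and never visits the venue node, which is how the choice of meta-path encodes a task-specific semantics over the heterogeneous links.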
A number of problems in statistical information processing and data analysis model the data as a Wide-Sense Stationary (WSS) time series, whose power spectrum (or equivalently, the covariance matrix) often acts as a sufficient statistic. Popular examples include speech processing, source localization in radar, neural signal processing, wireless communication, and so forth. The covariance matrix of such data exhibits a Toeplitz structure, which can be leveraged to design highly efficient compressive samplers to reduce the dimension of high dimensional WSS data, without requiring it to have a sparse representation. Unlike existing results in Compressed Sensing, the goal here is to recover the high dimensional covariance matrix (or infer parameters of interest from it), instead of reconstructing the data itself.
Inspired by our past work on nested sensor arrays, I will describe a new WSS data sketching technique, known as the "Generalized Nested Sampler" (GNS), to acquire compressive measurements in such a way that it becomes possible to perfectly reconstruct the original high dimensional covariance matrix from these compressed sketches. I will focus on low-rank Toeplitz covariance matrices and develop an efficient GNS-based sampler which allows the recovery of a rank-r Toeplitz covariance matrix from a compressed sketch of size O(√r) × O(√r), where the size of the sketch has no dependence on the ambient large dimension N. Our reconstruction technique will use a regularizer-free framework, combined with the ability to extrapolate additional "unobserved" entries of the N×N covariance matrix. The algorithm has significantly lower computational complexity (that does not scale with N) compared to recent nuclear-norm based compressive covariance estimators, and is provably robust against bounded errors. Finally, I will consider the special case of rank-1 and sparse covariance matrices that arise in the important problem of "phase retrieval" in optical imaging. The role of second-order difference sets in complex phase retrieval will be demonstrated, inspiring the design of a new class of non-uniform Fourier-based samplers that can provably recover a complex signal (up to a global phase ambiguity) from its amplitude measurements with a near-minimal number of samples. An interesting connection with the 4N-4 conjecture will also be established, which hypothesizes 4N-4 to be the minimum number of generic measurements necessary to ensure injectivity in N dimensions for complex phase retrieval.
Joint work with Heng Qiao.
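The simplest way to exploit the Toeplitz structure described above (a standard baseline, not the GNS sketching scheme) is to project the sample covariance onto Toeplitz matrices by averaging each diagonal, since for WSS data every entry on a diagonal estimates the same lag of the autocovariance:

```python
import numpy as np

def toeplitz_project(S):
    """Project a (real, symmetric) sample covariance onto Toeplitz
    matrices by averaging each diagonal; entry (i, j) of the result
    depends only on the lag |i - j|, as WSS stationarity dictates.
    (For complex data, the lower triangle would take conjugates.)"""
    n = S.shape[0]
    c = np.array([np.mean(np.diagonal(S, k)) for k in range(n)])
    T = np.zeros_like(S)
    for i in range(n):
        for j in range(n):
            T[i, j] = c[abs(j - i)]
    return T

# Sample covariance of hypothetical data: noisy, not exactly Toeplitz.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
S = X.T @ X / 200
T = toeplitz_project(S)
```

Diagonal averaging pools n - |k| noisy estimates per lag, so it already reduces estimation error; the talk's contribution is going much further, recovering the full low-rank Toeplitz matrix from a compressed sketch whose size is independent of N.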