Title: Surrogate Loss Functions in Machine Learning: What are the Fundamental Design Principles?

Speaker: Shivani Agarwal (UPenn)

Abstract: Surrogate loss functions are widely used in machine learning. For many machine learning problems, the ideal objective or loss function is computationally hard to optimize, and one therefore works instead with a (usually convex) surrogate loss that can be optimized efficiently. What are the fundamental design principles for such surrogate losses, and what are the associated statistical behaviors of the resulting algorithms? This talk will provide answers to some of these questions. In particular, we will discuss the theory of convex calibrated surrogate losses, which yield statistically consistent learning algorithms for the true learning objective, and we will provide fundamental principles and tools for designing such surrogate losses for a wide variety of machine learning problems. Our surrogate losses effectively decompose complex multiclass and multi-label learning problems into simpler binary learning problems, and come with corresponding decoding schemes that make the overall learning approach statistically consistent. We will also discuss the tool of strongly proper losses, which act as a fundamental primitive in deriving statistical guarantees for various learning problems, as well as connections with the field of property elicitation and with PAC learning. We will conclude with some open questions.

Bio: Shivani Agarwal is the Rachleff Family Associate Professor of Computer and Information Science at the University of Pennsylvania, where she also directs the NSF-sponsored Penn Institute for Foundations of Data Science (PIFODS) and co-directs the Penn Research in Machine Learning (PRiML) center. She has previously been a Radcliffe Fellow at Harvard, and has taught as a Ramanujan Fellow at the Indian Institute of Science and as a postdoctoral lecturer at MIT.
She serves as an Action Editor for the Journal of Machine Learning Research and an Associate Editor for the Harvard Data Science Review, and served as Program Co-chair for COLT 2020. Her research interests include computational, mathematical, and statistical foundations of machine learning and data science; applications of machine learning in the life sciences and beyond; and connections between machine learning and other disciplines such as economics, operations research, and psychology. Her group's research has been selected four times for spotlight presentations at the NeurIPS conference.
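The core idea in the abstract, replacing a computationally hard loss with a convex surrogate that upper-bounds it, can be sketched with a standard textbook example (not code from the talk): the 0-1 classification loss versus the hinge loss, one of the classic convex calibrated surrogates for binary classification.

```python
import numpy as np

def zero_one_loss(margin):
    # 0-1 loss as a function of the margin y*f(x):
    # 1 if the example is misclassified (margin <= 0), else 0.
    # Piecewise constant, so it is hard to optimize directly.
    return (margin <= 0).astype(float)

def hinge_loss(margin):
    # Hinge loss: a convex upper bound on the 0-1 loss,
    # and a calibrated surrogate for binary classification.
    return np.maximum(0.0, 1.0 - margin)

margins = np.array([-2.0, -0.5, 0.5, 2.0])
print(zero_one_loss(margins))  # [1. 1. 0. 0.]
print(hinge_loss(margins))     # [3.  1.5 0.5 0. ]
```

Note that the hinge loss dominates the 0-1 loss pointwise; calibration is the stronger property that minimizing the expected surrogate also minimizes the expected 0-1 loss, which is what makes the resulting algorithms statistically consistent.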