EBU-3B
La Jolla, CA 92092
Office 4232
nmhopkin @ eng dot ucsd dot edu
I am a PhD student in the theory group at UCSD, where I am thankful to be advised by Daniel Kane and Shachar Lovett. I received my BA in Mathematics with a minor in Computer Science from Harvard University, where I was lucky enough to work with Michael Mitzenmacher and Madhu Sudan. My research is generously supported by NSF GRFP, ARCS, and JSOE fellowships.
I am currently a visiting student at the Weizmann Institute (Ziskind 205).
If you are looking for UCSD Theory Lunch, it has been passed to the next generation.
I like singing, boardgames, squash, and sushi (in no particular order).
I am broadly interested in areas such as learning theory, computational geometry, combinatorics, approximation algorithms, and hardness of approximation. Currently, I mostly think about:
High Dimensional Expansion (HDX) is an emerging area with applications to sampling, error correction, approximation algorithms, and more. I study the analysis of Boolean functions and random walks on HDX, especially in the context of hardness of approximation. In Spring 2021, Shachar Lovett and I co-taught an introductory course on expanders and HDX. Detailed course notes can be found here, and the corresponding lectures here.
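Expansion is often measured through the spectral gap of a graph's random walk matrix. As a minimal sketch (my own illustration using numpy, not an excerpt from the course notes), here is the classic one-dimensional contrast: the complete graph mixes fast, the cycle slowly:

```python
import numpy as np

def spectral_gap(adj):
    """Spectral gap 1 - lambda_2 of the lazy random walk on a regular graph."""
    deg = adj.sum(axis=1)
    walk = adj / deg[:, None]                 # simple random walk matrix
    lazy = 0.5 * (np.eye(len(adj)) + walk)    # lazy walk: eigenvalues in [0, 1]
    eigs = np.sort(np.linalg.eigvalsh(lazy))[::-1]
    return 1.0 - eigs[1]

n = 20
complete = np.ones((n, n)) - np.eye(n)        # K_n: an excellent expander
cycle = np.zeros((n, n))
for i in range(n):
    cycle[i, (i + 1) % n] = cycle[(i + 1) % n, i] = 1.0  # C_n: a poor expander

# A large spectral gap certifies rapid mixing of the random walk.
print(spectral_gap(complete), spectral_gap(cycle))
```

High dimensional expanders extend this picture from graphs to simplicial complexes, where analogous spectral quantities of higher-order walks control local-to-global behavior.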
Many classical results in learning theory rely on uniform convergence: the idea that empirical risk should converge to true risk uniformly over all hypotheses simultaneously. Unfortunately, this paradigm breaks down in practical settings beyond basic worst-case analysis. I am interested in studying learnability in more realistic settings, such as learning under arbitrary distributional assumptions, where these traditional techniques fail.
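Concretely, uniform convergence asks that with high probability over an i.i.d. sample, every hypothesis's empirical risk tracks its true risk at once (a textbook formulation, with hypothesis class $\mathcal{H}$, loss $\ell$, and sample $z_1,\dots,z_m \sim \mathcal{D}$):

```latex
\Pr_{z_1,\dots,z_m \sim \mathcal{D}^m}\left[\,
  \sup_{h \in \mathcal{H}} \left|
    \underbrace{\frac{1}{m}\sum_{i=1}^{m} \ell(h, z_i)}_{\text{empirical risk}}
    \;-\;
    \underbrace{\mathbb{E}_{z \sim \mathcal{D}}\,\ell(h, z)}_{\text{true risk}}
  \right| \le \varepsilon \,\right] \ge 1 - \delta
```

The supremum over all of $\mathcal{H}$ is exactly what makes the guarantee worst-case; relaxing it is one route to more realistic models.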
When does adaptivity help us learn? In standard models, it is well known that adaptivity often provides no asymptotic benefit. I study how asking better questions allows us to build adaptive algorithms that drastically speed up learning. If you're interested in this area, check out this blog post for a basic primer.
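As a toy illustration of adaptivity (my own hypothetical example, not drawn from any paper above): learning a one-dimensional threshold passively requires labeling essentially every point, while adaptively choosing each query, binary search style, needs only logarithmically many labels:

```python
def learn_threshold(points, label, queries):
    """Binary search for the decision boundary of a threshold function.

    points:  sorted list of reals
    label:   oracle with label(x) = True iff x is at or past the threshold
    queries: one-element counter list, mutated to track label requests
    """
    lo, hi = 0, len(points)        # invariant: boundary index lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        queries[0] += 1
        if label(points[mid]):
            hi = mid               # threshold is at or before points[mid]
        else:
            lo = mid + 1           # threshold is strictly after points[mid]
    return lo                      # index of first positive point

points = [i / 1000 for i in range(1000)]
theta = 0.637                      # hypothetical true threshold
queries = [0]
idx = learn_threshold(points, lambda x: x >= theta, queries)
# About log2(1000) ~ 10 labels instead of 1000.
print(idx, queries[0])
```

For richer classes (halfspaces, polynomial threshold functions), this exponential gap between passive and adaptive label complexity is exactly what the active learning results above aim to recover, often with the help of stronger queries like comparisons.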
Sampling Equilibria: Fast No-Regret Learning in Structured Games
Daniel Beaglehole, Max Hopkins, Daniel Kane, Sihan Liu, Shachar Lovett
In Submission
Active Learning Polynomial Threshold Functions
Omri Ben-Eliezer, Max Hopkins, Chutong Yang, Hantao Yu
NeurIPS 2022
Explicit Lower Bounds Against Ω(n)-Rounds of Sum-of-Squares
Max Hopkins, Ting-Chun Lin
FOCS 2022
Eigenstripping, Spectral Decay, and Edge-Expansion on Posets
Jason Gaitonde, Max Hopkins, Tali Kaufman, Shachar Lovett, Ruizhe Zhang
RANDOM 2022 (Talk)
Realizable Learning is All You Need
Max Hopkins, Daniel Kane, Shachar Lovett, Gaurav Mahajan
COLT 2022 (Long Talk, Short Talk)
Hypercontractivity on High Dimensional Expanders: A Local-to-Global Method for Higher Moments
Mitali Bafna, Max Hopkins, Tali Kaufman, Shachar Lovett
STOC 2022 (Talk)
High Dimensional Expanders: Eigenstripping, Pseudorandomness, and Unique Games
Mitali Bafna, Max Hopkins, Tali Kaufman, Shachar Lovett
SODA 2022
Active Learning with Bounded Memory through Enriched Queries
Max Hopkins, Daniel Kane, Shachar Lovett, Michal Moshkovitz
COLT 2021 (Talk)
The Power of Comparisons for Actively Learning Linear Separators
Max Hopkins, Daniel Kane, Shachar Lovett
NeurIPS 2020
Point Location and Active Learning: Learning Halfspaces Almost Optimally
Max Hopkins, Daniel Kane, Shachar Lovett, Gaurav Mahajan
FOCS 2020 (Long Talk, Short Talk)
Noise Tolerant, Reliable Active Classification with Comparison Queries
Max Hopkins, Daniel Kane, Shachar Lovett, Gaurav Mahajan
COLT 2020 (Talk)
Doppelgangers: the Ur-Operation and Posets of Bounded Height
Thomas Browning, Max Hopkins, Zander Kelley
FPSAC 2018 (Extended Abstract)
Simulated Annealing for JPEG Quantization
Max Hopkins, Michael Mitzenmacher, Sebastian Wagner-Carena
DCC 2018 (Poster)
Code
A Novel CMB Component Separation Method: Hierarchical Generalized Morphological Component Analysis
Sebastian Wagner-Carena, Max Hopkins, Ana Rivero, Cora Dvorkin
MNRAS, May 2020
Code
How to Actively Learn in Bounded Memory
A basic exposition of new work toward characterizing bounded-memory active learning, with a few concrete examples.
The Power of Comparison in Active Learning
An introduction to active learning with comparison queries, covering the seminal work of KLMZ and recent extensions.
Notes on Expanders and High Dimensional Expansion
Lecture notes from my course with Shachar Lovett on Expanders and HDX.
A Few Tips for Getting into CS Theory Research as an Undergraduate
A few slides with tips for entering theory research, along with links to summer program opportunities.
Representation-Theoretic Techniques for Independence Bounds of Cayley Graphs
My undergraduate thesis, written under Madhu Sudan.
Bordism Homology
A survey of unoriented bordism homology.
On High Dimensional Expanders and Hardness of Approximation (IISC Bangalore)
Hypercontractivity on High Dimensional Expanders (STOC '22, TTIC, UMich Theory Seminar, MIT A&C Seminar)
Realizable Learning is All You Need (COLT '22, UWaterloo, GaTech Student Seminars, Stanford Theory Lunch)
High Dimensional Expanders and Unique Games (SODA '22, UW Seminar, SoS+TCS Seminar, Cornell Theory Tea, Chicago/TTIC (Reading Group))
Active Learning in Bounded Memory (COLT '21)
Point Location and Active Learning (FOCS '20, Harvard CMSA)
Noise Tolerant Active Learning with Comparisons (COLT '20)
On the Equivalence of Realizable and Agnostic Learning
Active Learning with Bounded Memory
Point Location and Active Learning
An Open Problem in Noisy Sorting
Small Set Expansion in the Johnson Graph through High Dimensional Expansion
High Dimensional Expanders: The Basics
Reasoning in the Presence of Noise
The Power of Comparisons for Active Learning
On the Cohomology of Dihedral Groups
Understanding Doppelgangers and the Ur-Operation
Co-taught CSE 291, Expander Graphs and High-Dimensional Expanders, Spring 2021.
TA for CSE 200, Computability and Complexity, Winter 2020.
TA for APMTH 106, Applied Algebra, Fall 2017.