Photo cred to Cynthia Guo

Max Hopkins

EBU-3B, Office 4232
La Jolla, CA 92092

nmhopkin @ ucsd dot edu

About

I am a PhD student in the theory group at UCSD, where I am thankful to be advised by Daniel Kane and Shachar Lovett. I received my BA in Mathematics with a minor in Computer Science from Harvard University, where I was lucky enough to work under Michael Mitzenmacher and Madhu Sudan. My research is generously supported by NSF GRFP, ARCS, and JSOE fellowships.

I am grateful to have spent '22-'23 as a visiting student at the Weizmann Institute with Irit Dinur, and will spend '24-'27 as a postdoc at Princeton and the Institute for Advanced Study, working with Pravesh Kothari and Avi Wigderson respectively.

If you are looking for UCSD Theory Lunch, it has been passed to the next generation.

I like singing, board games, squash, and sushi (in no particular order).

Interests

I am broadly interested in areas such as Boolean function analysis, hardness of approximation, learning theory, privacy, and computational geometry. Currently, I mostly think about:

  1. High Dimensional Expanders
     High Dimensional Expansion (HDX) is an emerging area with applications across sampling, error correction, approximation algorithms, and more. I study the analysis of Boolean functions and random walks on HDX, especially in the context of hardness of approximation. In Spring 2021, Shachar Lovett and I co-taught an introductory course on expanders and HDX. Detailed course notes can be found here, and the corresponding lectures here.

  2. Learning beyond Uniform Convergence
     Many classical results in learning theory rely on uniform convergence: the idea that empirical risk should converge to true risk across all hypotheses simultaneously (a formal statement appears after this list). This paradigm fails to capture the rates and other phenomena observed in practice, so I study statistical learning beyond uniform convergence, including basic settings, such as learning with data-dependent assumptions, which remain largely enigmatic at a theoretical level.

  3. Active Learning through Enriched Queries
     When does adaptivity help us learn? In standard models, it is well known that adaptivity often provides no asymptotic benefit. I study how asking better questions lets us build adaptive algorithms that drastically speed up learning (a toy example appears after this list). If you're interested in this area, check out this blog post for a basic primer.
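For readers unfamiliar with the term, here is a minimal formal statement of the uniform convergence condition mentioned in item 2 (standard textbook formulation; the notation is illustrative and not tied to any paper on this page). A hypothesis class H has the uniform convergence property if for every ε, δ > 0 there is a sample size m(ε, δ) such that for every distribution D, with probability at least 1 − δ over an i.i.d. sample S ~ D^m with m ≥ m(ε, δ),

    sup_{h ∈ H} | R̂_S(h) − R_D(h) | ≤ ε,

where R̂_S(h) is the empirical risk of h on S and R_D(h) its true (population) risk.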
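To make item 3 concrete, here is a toy sketch (a standard illustrative example, not code from any paper or post on this page) of how choosing your own queries speeds up learning a one-dimensional threshold: a passive learner needs roughly 1/ε random labels to localize the threshold to accuracy ε, while an active learner can binary search with roughly log(1/ε) label queries. The function and oracle names below are hypothetical.

    # Toy sketch: actively learning a 1D threshold f(x) = 1[x >= t] on [0, 1].
    # A passive learner needs ~1/eps random labels to pin down t to accuracy eps;
    # an active learner chooses its queries and binary searches in ~log2(1/eps).
    import random

    def active_learn_threshold(query_label, eps):
        """Binary search for the unknown threshold using O(log(1/eps)) label queries.

        query_label(x) returns the label 1[x >= t] of any point x we choose.
        """
        lo, hi, queries = 0.0, 1.0, 0
        while hi - lo > eps:
            mid = (lo + hi) / 2
            queries += 1
            if query_label(mid):   # mid is already past the threshold
                hi = mid
            else:                  # the threshold lies strictly above mid
                lo = mid
        return (lo + hi) / 2, queries

    if __name__ == "__main__":
        t = random.random()        # unknown threshold
        t_hat, q = active_learn_threshold(lambda x: x >= t, eps=1e-4)
        # ~14 queries, versus the ~10,000 random labels a passive learner needs
        print(f"true t = {t:.5f}, estimate = {t_hat:.5f}, queries = {q}")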

Papers

Author order is alphabetical (unless otherwise stated)

Preprints

Publications

Blog Posts

Miscellaneous Writing

Talks

Upcoming Talks
  • Ben-Gurion University (March 27, April 16)

Invited Talks
  • On High Dimensional Expanders and Hardness of Approximation (IISC Bangalore)

  • Hypercontractivity on High Dimensional Expanders (STOC '22, TTIC, UMich Theory Seminar, MIT A&C Seminar, Weizmann Guest Lecture)

  • Realizable Learning is All You Need (COLT '22, UWaterloo, GaTech Student Seminars, Stanford Theory Lunch)

  • High Dimensional Expanders and Unique Games (SODA '22, UW Seminar, SoS+TCS Seminar, Cornell Theory Tea, Chicago/TTIC (Reading Group))

  • Active Learning in Bounded Memory (COLT '21)

  • Point Location and Active Learning (FOCS '20, Harvard CMSA)

  • Noise Tolerant Active Learning with Comparisons (COLT '20)

UCSD Theory Seminar/Lunch
  • On the Equivalence of Realizable and Agnostic Learning

  • Active Learning with Bounded Memory

  • Point Location and Active Learning

  • An Open Problem in Noisy Sorting

  • Small Set Expansion in the Johnson Graph through High Dimensional Expansion

  • High Dimensional Expanders: The Basics

  • Reasoning in the Presence of Noise

  • The Power of Comparisons for Active Learning

Undergraduate Talks
  • On the Cohomology of Dihedral Groups

  • Understanding Doppelgangers and the Ur-Operation

Teaching

At UCSD
  1. Co-taught CSE 291, Expander Graphs and High Dimensional Expanders, Spring 2021.

  2. TA for CSE 200, Computability and Complexity, Winter 2020.

At Harvard
  1. TA for APMTH 106, Applied Algebra, Fall 2017.