Who am I?

My name is Tzu-Mao Li. I am an assistant professor in UC San Diego's Computer Science & Engineering department. Check out my webpage if you haven't visited it yet.

For prospective students/postdocs

I am not actively recruiting these days. However, the following tips should still be useful for general graduate school admission.
If you want to work with me as a graduate student, please apply through the UCSD CSE department website and select me as one of your potential advisors in the application form (otherwise, there is little chance I will notice your application). You don't need to send me a separate email when you apply, unless you have specific questions; I am happy to answer questions regarding the application and research. If you are interested in a postdoc position, please send me an email (tzli@ucsd.edu).
I would like to see the following in your application materials: be concrete and clear; give examples; answer the "why" and "what" of your research interests. Ideally, you want to show a non-trivial, mature view of research fields instead of just throwing around buzzwords. Ideally, you would also have a concrete proposal of a research direction, but it is fine if it is vague (you don't want me to steal your idea anyway). I do not expect you to answer these questions perfectly; I probably can't answer them perfectly myself either. Try your best.

My expectations of students

I do not expect students to have prior experience with computer graphics or programming languages (it is a plus if you have some). However, make sure you are enthusiastic about at least a subset of them (for example, the topics I list below). I do expect students to have solid mathematics skills and/or software engineering skills. Ideally, a student should have some expertise that I don't have, so we can learn from each other.

What's my research?

Check out my job talk slides for a general overview.
Check out my webpage for publications and notes.

I work on the interactions between computer graphics, vision, programming systems, and machine learning. My main research direction is something I call "differentiable visual computing," where we go beyond neural networks and backpropagate through computer graphics programs. In graphics, we explicitly model how the world behaves (often through physics) instead of relying on generic neural networks to learn everything from scratch. Learning from data by backpropagating through graphics programs gives us more control and a better understanding of our programs' behavior, enables high performance, and makes debugging and verification easier. The applications include: helping self-driving cars make better decisions, training robots to interact with the environment using physical information, creating more realistic virtual realities, designing buildings and rooms to have better lighting, designing 3D physical objects with desired appearance and functionality, reconstructing the 3D structures of cells from microscope images, and allowing movie artists to produce better film shots.

Differentiating general graphics programs correctly and efficiently is much more complicated than differentiating a convolution layer, due to the general computation patterns and the mathematics involved. I derive algorithms that differentiate graphics programs while taking discontinuities into account. I design compilers that explore trade-offs of derivative computation and generate correct and efficient code. I look at applications where these differentiable graphics programs can be useful.
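To see why discontinuities matter, here is a minimal sketch (all names are made up for illustration). The integral I(theta) = ∫₀¹ [x < theta] dx equals theta, so its derivative is 1, yet differentiating the discontinuous integrand pointwise, as generic autodiff would, yields 0:

```python
# Why naive autodiff fails on discontinuities: a minimal sketch.
# I(theta) = integral_0^1 [x < theta] dx = theta, so dI/dtheta = 1.
import random

random.seed(0)
xs = [random.random() for _ in range(100_000)]
theta = 0.3

# Monte Carlo estimation of the integral itself works fine:
mc = sum(1.0 for x in xs if x < theta) / len(xs)  # close to 0.3

# Naively differentiating inside the integral: the step function
# [x < theta] has zero derivative almost everywhere, so generic
# autodiff of the sampled estimator reports a gradient of 0.
naive_grad = 0.0

# The true derivative comes entirely from the moving discontinuity
# at x = theta: dI/dtheta = (value below) - (value above) = 1 - 0 = 1.
boundary_grad = 1.0

print(mc, naive_grad, boundary_grad)
```

Handling such boundary terms correctly, for the much richer discontinuities in rendering and simulation, is what makes the problem hard.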

Many graphics researchers are trying to replace graphics pipelines with deep learning components. While this is a cool direction and is in my research scope, one of my main directions is to introduce differentiable graphics programs into deep learning pipelines. I claim that this is the path toward controllable, interpretable, robust, and efficient models. I am also highly interested in non-data-driven settings, where we try to figure out latent variables solely based on our knowledge of the physical process.

The following are some of my current research thrusts:

Differentiable rendering: How do we backpropagate through the rendering equation, so that we can infer 3D information from 2D observations? How do we handle discontinuities, and how do we make the differentiation as efficient as possible? How do we differentiate rendering with arbitrary geometry and material representations, while modeling physical phenomena such as occlusion, surface and volumetric scattering, dispersion, and diffraction? Ultimately, we want to build a differentiable renderer that can generate and differentiate noise-free million-pixel images with billions of varied primitives within seconds, while accurately modeling optics. We want to use the renderer for artificial intelligence agents to infer 3D information, for reconstructing detailed 3D models for virtual/augmented reality, for designing and fabricating real-world 3D objects with desired optical properties, for designing imaging systems, and for analyzing biomedical data using inverse optics.
related work of mine: [Li2018a] [Azinovic2019] [Zhao2020] [Bangaru2020] [Li2020] [Bangaru2021] [Chandra2022] [Bangaru2022] [Chang2023]
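A toy instance of the idea, stripped of all the hard parts (the forward model and names below are illustrative, not any real renderer): render one pixel with Lambertian shading, then recover the unknown albedo by gradient descent through the forward model.

```python
# Toy inverse rendering: recover a diffuse albedo from one observed pixel.
# Forward model (Lambertian shading): pixel = albedo * max(0, n . l).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

n = (0.0, 0.0, 1.0)          # surface normal
l = (0.0, 0.6, 0.8)          # light direction (unit length)
shade = max(0.0, dot(n, l))  # = 0.8

true_albedo = 0.5
target = true_albedo * shade  # the "observed" pixel

albedo = 0.1  # initial guess
lr = 0.5
for _ in range(200):
    pixel = albedo * shade
    # d/d albedo of the loss (pixel - target)^2, by the chain rule:
    grad = 2.0 * (pixel - target) * shade
    albedo -= lr * grad

print(albedo)  # converges to the true albedo, 0.5
```

The research questions above are about making this loop work when the forward model is a full physically-based renderer with visibility discontinuities and millions of parameters.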

Differentiable image processing: How do we design image processing pipelines that can learn from data and process images with tens or hundreds of megapixels, in real time, on mobile devices? I claim that instead of stacking more convolution layers and increasing overparametrization, we should generalize the programming model we use for designing image processing algorithms. Instead of high-arithmetic-intensity deep learning layers, we want to take building blocks from more traditional image processing algorithms, parametrize them, and differentiate through them to learn the parameters. To achieve this, we need better compilers that can fuse low-arithmetic-intensity computation and differentiate array code.
related work of mine: [Li2018b] [Adams2019] [Li2020] [Bernstein2020] [Ma2022]
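As a small sketch of "parametrize a classic operator and learn its parameter" (a 1-D signal for brevity; every name here is made up): an unsharp mask whose strength is fit to data by gradient descent, rather than a stack of learned convolutions.

```python
# Learn the strength of an unsharp mask from data.
# out = signal + amount * (signal - blur(signal)); fit `amount`.

def blur(sig):
    # 3-tap box blur with clamped boundaries.
    out = []
    for i in range(len(sig)):
        left = sig[max(i - 1, 0)]
        right = sig[min(i + 1, len(sig) - 1)]
        out.append((left + sig[i] + right) / 3.0)
    return out

def unsharp(sig, amount):
    return [s + amount * (s - b) for s, b in zip(sig, blur(sig))]

signal = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.5, 0.2]
true_amount = 0.7
target = unsharp(signal, true_amount)  # the "training data"

amount, lr = 0.0, 0.5
detail = [s - b for s, b in zip(signal, blur(signal))]
for _ in range(100):
    out = unsharp(signal, amount)
    # d loss / d amount for loss = sum((out - target)^2):
    grad = sum(2.0 * (o - t) * d for o, t, d in zip(out, target, detail))
    amount -= lr * grad

print(amount)  # converges to 0.7
```

The compiler challenge is doing this at scale: fusing many such low-arithmetic-intensity stages and differentiating them automatically.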

Differentiable physics simulation: How do we backpropagate through ODE/PDE solvers to make inferences based on the dynamics? This problem is studied in the optimal control and sensitivity analysis literature (and it is also what inspired the neural ODE work). However, differentiation of discontinuities and boundary conditions in ODEs/PDEs is not well understood. Furthermore, how do we efficiently map these computations to modern hardware? How do we reduce memory usage when backpropagating through an iterative solver? Answering these questions will enable us to train robot controllers orders of magnitude more efficiently, design 3D objects with physical constraints, or even build more elaborate epidemiology models.
related work of mine: [Hu2019] [Hu2020] [Bangaru2021]
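The core mechanism, in its simplest possible form (an illustrative sketch, not any particular system): run explicit Euler forward on dx/dt = -k*x, then accumulate the adjoint backwards through the stored trajectory to get the gradient of the final state with respect to the parameter k, and check it against the closed-form answer.

```python
# Backpropagation through an ODE solver: explicit Euler on dx/dt = -k x.

def simulate(k, x0, dt, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * (-k * xs[-1]))  # x_{t+1} = (1 - k dt) x_t
    return xs

def grad_k(k, x0, dt, steps):
    xs = simulate(k, x0, dt, steps)  # stored trajectory (the memory cost!)
    # Reverse pass: adjoint adj = d x_N / d x_{t+1}, walked backwards.
    adj, g = 1.0, 0.0
    for t in reversed(range(steps)):
        g += adj * (-dt * xs[t])   # partial of x_{t+1} w.r.t. k
        adj *= (1.0 - k * dt)      # partial of x_{t+1} w.r.t. x_t
    return g

k, x0, dt, steps = 2.0, 1.0, 0.01, 100
# Closed form: x_N = x0 (1 - k dt)^N, so
# d x_N / dk = -N dt x0 (1 - k dt)^(N - 1).
analytic = -steps * dt * x0 * (1.0 - k * dt) ** (steps - 1)
print(grad_k(k, x0, dt, steps), analytic)  # the two agree
```

Note that the reverse pass needs the whole forward trajectory; reducing that memory footprint is exactly one of the open questions above.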

Domain-specific compilers for differentiable computer graphics: How do we design compilers that take high-level computer graphics code (rendering, image processing, simulation, geometry processing), and automatically output high-performance low-level code along with the derivative code? How do we certify the correctness? Just like how deep learning frameworks democratize machine learning, I want to build an easy-to-use programmable differentiable graphics system that makes building differentiable graphics pipelines as simple as training an MNIST classifier in PyTorch, while generating reliable code.
related work of mine: [Anderson2017] [Li2018b] [Adams2019] [Hu2019] [Hu2020] [Bernstein2020] [Bangaru2021] [Anderson2021]
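To give a flavor of what "output the derivative code" means (a toy sketch with made-up names, many orders of magnitude simpler than a real compiler): expressions are nested tuples, and diff() emits a new expression for the derivative, which a real compiler would then optimize and lower to fast code.

```python
# A tiny source-to-source differentiator over a toy expression language.
# Expressions: numbers, variable names (strings), or (op, lhs, rhs).

def diff(e, var):
    if isinstance(e, (int, float)):
        return 0.0
    if isinstance(e, str):                      # a variable
        return 1.0 if e == var else 0.0
    op, a, b = e
    if op == '+':
        return ('+', diff(a, var), diff(b, var))
    if op == '*':                               # product rule
        return ('+', ('*', diff(a, var), b), ('*', a, diff(b, var)))
    raise ValueError(op)

def evaluate(e, env):
    if isinstance(e, (int, float)):
        return e
    if isinstance(e, str):
        return env[e]
    op, a, b = e
    x, y = evaluate(a, env), evaluate(b, var=None) if False else evaluate(b, env)
    return x + y if op == '+' else x * y

# f(x) = x*x + 3*x  =>  f'(x) = 2x + 3
f = ('+', ('*', 'x', 'x'), ('*', 3.0, 'x'))
df = diff(f, 'x')
print(evaluate(df, {'x': 2.0}))  # 7.0
```

A production system additionally has to handle arrays, control flow, discontinuities, and the performance trade-offs between recomputing and storing intermediate values.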

Accelerating physically-based rendering: Physically-based rendering is known to be time-consuming, due to its need to compute multi-dimensional integrals using Monte Carlo sampling. How do we make it faster? I believe there are two keys to the ultimate rendering algorithm: 1) re-using Monte Carlo samples through statistical analysis, and 2) replacing heuristics with data-driven components.
related work of mine: [Li2012] [Wu2015] [Li2015] [Anderson2017] [Gharbi2019] [Wang2021]
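The basic trade-off at play can be seen in a textbook example (standard Monte Carlo practice, not any specific rendering algorithm): estimating ∫₀¹ 3x² dx = 1 with uniform samples versus importance sampling from p(x) = 2x, which puts samples where the integrand is large and thereby lowers variance.

```python
# Monte Carlo integration of integral_0^1 3x^2 dx = 1:
# uniform sampling vs. importance sampling with p(x) = 2x.
import math
import random

random.seed(1)
N = 20000

def uniform_estimate():
    return sum(3.0 * random.random() ** 2 for _ in range(N)) / N

def importance_estimate():
    total = 0.0
    for _ in range(N):
        x = math.sqrt(random.random())     # inverse-CDF sampling of p(x) = 2x
        total += 3.0 * x ** 2 / (2.0 * x)  # f(x) / p(x)
    return total / N

print(uniform_estimate(), importance_estimate())  # both near 1.0
```

Rendering raises the stakes: the integrals are high-dimensional and the good sampling densities are unknown, which is where sample re-use and learned components come in.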

Beyond differentiation: Can we derive or automatically find *something* that approximates a function locally and better informs our inference algorithms, compared to derivatives? For example, Fourier analysis or wavelet analysis gives us strictly more information than derivatives (differentiation is a linear ramp in the frequency domain). How can we use them in optimization algorithms? Can we find something that scales better with dimensionality compared to Fourier analysis? Can we develop systems that help us find these quantities given a program?
related work of mine: Nothing here yet. Hopefully your work will be listed here!
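The parenthetical fact above, that differentiation is a linear ramp in the frequency domain, can be checked numerically in a few lines (a standard spectral-derivative demonstration): multiplying the FFT of sin(3x) by i*omega and transforming back reproduces 3*cos(3x) to machine precision.

```python
# Differentiation as a linear ramp in the frequency domain:
# d/dx f computed as ifft(i * omega * fft(f)) on a periodic grid.
import numpy as np

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = np.sin(3.0 * x)

omega = np.fft.fftfreq(n, d=x[1] - x[0]) * 2.0 * np.pi  # angular frequencies
df = np.real(np.fft.ifft(1j * omega * np.fft.fft(f)))   # spectral derivative

err = np.max(np.abs(df - 3.0 * np.cos(3.0 * x)))
print(err)  # essentially zero
```

The open question is whether richer local summaries than this single ramp, and quantities that scale past the curse of dimensionality, can drive optimization directly.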