
I'm a fourth-year PhD candidate in computer science at UC Berkeley, where I'm advised by Michael I. Jordan and Nika Haghtalab. I'm a Google PhD Fellow, an NSF Graduate Research Fellow, and affiliated with the Berkeley AI Research Lab (BAIR).
My current research focuses on applying large language models to reasoning problems in low-resource domains where verification is difficult, such as open problems in niche theoretical areas. More broadly, I'm interested in developing conceptual foundations for multi-domain intelligence; my theoretical research analyzes mathematical frameworks for provable multi-domain machine learning, including multi-distribution learning and calibrated forecasting.
I received my B.S. from Caltech in 2020 and have previously interned with Google Research, NVIDIA Research, and Salesforce Research.
Selected Works
(α-β) denotes alphabetical author ordering.
- Sample, Scrutinize and Scale: Effective Inference-Time Search by Scaling Verification
- Algorithmic Content Selection and the Impact of User Disengagement
- A Unifying Perspective on Multi-Calibration: Game Dynamics for Multi-Objective Learning (NeurIPS 2023)
- On-Demand Sampling: Learning Optimally from Multiple Distributions
Selected Awards
- Google PhD Fellowship (2024)
- NSF Graduate Research Fellowship (2023)
- NeurIPS Best Paper Award (2022)