Research

I am broadly interested in uncertainty quantification. To this end, I have worked on computational methods for accelerating Bayesian inference in simulator-based models, and on locally adaptive conformal prediction methods for deep learning models.

Adaptive Uncertainty Quantification for Generative AI

Modern deep learning models function as black boxes and do not expose their training data to the end user. We develop a novel conformal prediction framework that adapts directly to the calibration data, circumventing the need for further data splitting or access to the training data.

Learn more →
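For context, the standard split conformal baseline that this line of work builds on can be sketched as follows. This is a generic illustration, not the adaptive method from the paper; the function and variable names are my own.

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_labels, test_pred, alpha=0.1):
    """Split conformal prediction interval (generic baseline).

    Absolute residuals on a held-out calibration set serve as
    nonconformity scores; the finite-sample-corrected (1 - alpha)
    quantile of those scores widens the point prediction into an
    interval with marginal coverage at least 1 - alpha.
    """
    scores = np.abs(cal_labels - cal_preds)  # nonconformity scores
    n = len(scores)
    # finite-sample correction: ceil((n + 1)(1 - alpha)) / n, capped at 1
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, q_level, method="higher")
    return test_pred - q, test_pred + q

# toy example: calibration residuals are N(0, 0.5^2)
rng = np.random.default_rng(0)
cal_preds = rng.normal(size=500)
cal_labels = cal_preds + rng.normal(scale=0.5, size=500)
lo, hi = split_conformal_interval(cal_preds, cal_labels, test_pred=0.0)
```

The interval here has the same width everywhere; the locally adaptive methods above aim to let that width vary with the calibration data.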
Tree Bandits for Generative Bayes

We develop a self-aware framework for likelihood-free Bayesian inference that learns from past trial and error. We apply recursive partitioning classifiers to the ABC lookup table to sequentially refine high-likelihood regions into boxes, each of which is treated as an arm in a binary bandit problem with ABC acceptance as the reward.

Learn more →
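The "ABC lookup table" referenced above is the record of proposals and accept/reject outcomes produced by plain rejection ABC, which can be sketched as follows. This is the naive baseline only, with illustrative names; the tree-bandit scheme partitions and exploits this table rather than sampling blindly from the prior.

```python
import numpy as np

def rejection_abc(observed, simulate, prior_sample, eps, n_trials=2000, seed=0):
    """Plain rejection ABC: propose from the prior, simulate, and
    accept when the simulated data lands within eps of the observation.
    The (theta, accept) pairs form the lookup table; each acceptance
    is the binary reward a bandit scheme could learn from.
    """
    rng = np.random.default_rng(seed)
    table = []
    for _ in range(n_trials):
        theta = prior_sample(rng)
        x = simulate(theta, rng)
        accept = abs(x - observed) < eps  # ABC acceptance = binary reward
        table.append((theta, accept))
    return table

# toy model: infer a Gaussian mean with a uniform prior
simulate = lambda theta, rng: rng.normal(theta, 1.0)
prior_sample = lambda rng: rng.uniform(-5.0, 5.0)
table = rejection_abc(observed=1.0, simulate=simulate,
                      prior_sample=prior_sample, eps=0.5)
accepted = [theta for theta, ok in table if ok]
```

Most prior draws are wasted here; the accepted thetas concentrate near the true mean, which is exactly the high-likelihood region the recursive partitioning refines into bandit arms.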
Measurement in the Age of LLMs

Much of social science centers on terms like "ideology" or "power", which generally elude precise definition and whose contextual meanings are trapped in surrounding language. This paper explores the use of large language models (LLMs) to flexibly navigate the conceptual clutter inherent to social scientific measurement tasks. We elicit ideological scales of both legislators and text, which accord closely with established methods and our own judgment.

Learn more →