We posted a preprint introducing our new method IQ-BART, which employs a forest prior over the conditional quantile function, enabling uncertainty quantification for nonparametric conditional models.
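To give a flavor of the implied-quantile idea, here is a minimal illustrative sketch, not the paper's implementation: if Q(τ | x) is a draw of the conditional quantile function (in IQ-BART, a sum of trees), then y = Q(u | x) with u ~ Uniform(0, 1) is a draw from the implied conditional distribution. The function `quantile_fn` below is a hypothetical stand-in, with a Gaussian quantile function taking the place of a tree-ensemble draw.

```python
import numpy as np
from scipy.stats import norm

def quantile_fn(tau, x):
    # Hypothetical stand-in for one posterior draw of the tree-ensemble
    # quantile function Q(tau | x); here, the quantile function of
    # N(sin(x), 0.5^2) plays that role.
    return norm.ppf(tau, loc=np.sin(x), scale=0.5)

def sample_implied_conditional(x, n=1000, seed=0):
    """Inverse-CDF sampling from the conditional law implied by Q(. | x):
    draw u ~ Uniform(0, 1) and evaluate y = Q(u | x)."""
    u = np.random.default_rng(seed).uniform(size=n)
    return quantile_fn(u, x)

# Repeating this across posterior draws of Q yields uncertainty bands for
# any functional of the conditional distribution (mean, quantiles, density).
ys = sample_implied_conditional(x=1.0)
print(ys.mean(), np.quantile(ys, [0.05, 0.95]))
```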
I’m a fourth-year PhD student in the Department of Statistics at the University of Chicago, working with Veronika Ročková.
My research aims to reconcile the inferential rigor of Bayesian statistics with the predictive power and scalability of modern machine learning systems. My work treats the relationship between these fields as symbiotic and transfers ideas in both directions. This involves exporting statistical methods for uncertainty quantification into large-scale predictive models, while also importing the capabilities of these predictive systems to elicit informative priors and accelerate Bayesian computation.
Latest
I'll be at BNP14, presenting work on accelerating likelihood-free inference with a tree-based reinforcement learning method.
Our new preprint on AI-Powered Bayesian Inference is now available, in which we develop a framework for leveraging AI systems to elicit informative priors for Bayesian statistical analyses.
Check out the preprint of my latest work, "Adaptive Uncertainty Quantification for Generative AI" with Jungeum Kim and Veronika Ročková, in which we construct locally adaptive conformal prediction sets for settings where training data may be unavailable, such as modern large-scale generative models (a baseline conformal sketch follows this list).
I gave an oral presentation at IC2S2 in Philadelphia on representations of ideology in large language models.
I posted a preprint with Jungeum Kim and Veronika Ročková on accelerating approximate Bayesian computation using tree-based partitioning.
My work with Aaron Schein was presented at the NeurIPS workshop "I Can't Believe It's Not Better!" in New Orleans, LA. Check out the preprint here.
I presented my work on eliciting ideological scales from large language models at the New Directions in Analyzing Text as Data conference (TADA) in Amherst, MA.
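For context on the "Adaptive Uncertainty Quantification for Generative AI" entry above: conformal prediction turns a model's residuals on held-out calibration data into prediction sets with finite-sample coverage. Below is a minimal sketch of plain split conformal prediction, assuming a toy model and synthetic data; it is the standard baseline construction, not the locally adaptive, training-data-free procedure from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy calibration data: y = x + noise; `predict` stands in for any fitted model.
x_cal = rng.uniform(-1, 1, 500)
y_cal = x_cal + rng.normal(0, 0.1, 500)
predict = lambda x: x  # placeholder model

alpha = 0.1  # target miscoverage (90% coverage)
scores = np.abs(y_cal - predict(x_cal))            # conformity scores
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))  # finite-sample quantile index
qhat = np.sort(scores)[k - 1]                      # calibrated threshold

# Prediction set for a new point: predict(x) +/- qhat.
x_new = 0.3
print("90% prediction interval:", (predict(x_new) - qhat, predict(x_new) + qhat))
```

Because the threshold qhat is a single global quantile, the resulting intervals have constant width everywhere; making them locally adaptive, and doing so without training data, is the problem the paper addresses.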