Check out a preprint of my latest work, "Adaptive Uncertainty Quantification for Generative AI," with Jungeum Kim and Veronika Ročková, in which we construct locally adaptive conformal prediction sets in settings where training data may not be available, such as modern large-scale generative models.
I’m a fourth-year PhD student in the Department of Statistics at the University of Chicago, working with Veronika Ročková and Aaron Schein. I’m interested in Bayesian computation, Bayesian optimization & adaptive experimental design, and natural language processing, particularly in applications to political and social science.
Latest
I gave an oral presentation at IC2S2 in Philadelphia on representations of ideology in large language models.
Posted a preprint with Jungeum Kim and Veronika Ročková on accelerating approximate Bayesian computation using tree-based partitioning.
My work with Aaron Schein was presented at the NeurIPS workshop "I Can't Believe It's Not Better" in New Orleans, LA. Check out the preprint here.
Presented my work on eliciting ideological scales from large language models at the New Directions in Analyzing Text as Data (TADA) conference in Amherst, MA.