Scaling Languages for Probabilistic Reasoning

Steven Holtzen, UCLA

3:55 p.m.–4:55 p.m., April 15, 2021   |   Zoom

Contact Ginny Watterson for the Zoom link.

Machine learning models have grown in size and sophistication, leading to a need for more expressive and accessible modeling languages. To meet this need, probabilistic artificial intelligence primitives are increasingly being fused into the foundations of many programming languages. Probabilistic programming languages (PPLs) exemplify this trend: they imbue programming languages with a notion of probabilistic uncertainty, enabling programmers to naturally specify probabilistic models.

The goal of a probabilistic programming language is to perform probabilistic inference: given a probabilistic program, compute the probability that it outputs a particular value. Probabilistic inference is computationally hard in general, which has so far limited the applications of PPLs.
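To make the inference task concrete, the following is a minimal Python sketch of exact inference by enumeration for a toy discrete probabilistic program (two dependent coin flips with an observation). The program encoding and names here are illustrative only and do not reflect the syntax or implementation of any particular PPL.

```python
from itertools import product

# A toy discrete probabilistic program, written out by hand:
#   x ~ flip(0.5)
#   y ~ flip(0.9) if x else flip(0.1)
#   observe(y)     # condition on y coming out true
#   return x       # query: how likely is x to be true?
def joint_weight(x: bool, y: bool) -> float:
    """Probability of one complete assignment to (x, y)."""
    p_x = 0.5                                              # x ~ flip(0.5)
    p_y = (0.9 if y else 0.1) if x else (0.1 if y else 0.9)
    return p_x * p_y

assignments = list(product([False, True], repeat=2))

# Exact inference by enumeration: sum the weight of every assignment that
# satisfies the observation (and, for the numerator, the query), then normalize.
numer = sum(joint_weight(x, y) for x, y in assignments if y and x)
denom = sum(joint_weight(x, y) for x, y in assignments if y)
print(f"P(x | observation) = {numer / denom:.3f}")  # -> 0.900
```

Enumeration is exact but touches every one of the 2^n complete assignments, which is precisely why general-purpose inference stops scaling as programs accumulate random variables.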

In this talk, Steven Holtzen of UCLA will discuss how he has expanded the scope of PPLs by exploiting program structure. First, he will describe “dice”, the first PPL that targets discrete probabilistic programs and can scale to tens of thousands of random variables. Holtzen will show that dice’s unique focus enables new and surprising applications, including classical simulation of quantum circuits and probabilistic model checking. Finally, he will explore the key research questions that must be answered to bring PPLs closer to being a tool in the toolbox of every programmer.
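The scaling claim above rests on exploiting program structure rather than enumerating the joint distribution. The Python sketch below is only a hypothetical illustration of that gap, not a description of dice’s internals (those are the subject of the talk): it computes the probability that the XOR of several independent coin flips is true, once by brute-force enumeration and once by a structure-aware fold.

```python
from functools import reduce
from itertools import product

# Query: probability that the XOR (parity) of n independent coin flips is true.
probs = [0.3, 0.6, 0.8, 0.5, 0.9]  # illustrative biases, one per flip

def xor_prob_enumerate(ps):
    """Brute force: sum the weight of all 2^n assignments with odd parity."""
    total = 0.0
    for bits in product([0, 1], repeat=len(ps)):
        weight = 1.0
        for bit, p in zip(bits, ps):
            weight *= p if bit else 1.0 - p
        if sum(bits) % 2 == 1:
            total += weight
    return total

def xor_prob_structured(ps):
    """Exploit independence: fold over the flips, tracking only P(parity so far is odd)."""
    return reduce(lambda odd, p: odd * (1.0 - p) + (1.0 - odd) * p, ps, 0.0)

print(xor_prob_enumerate(probs))   # exponential in len(probs)
print(xor_prob_structured(probs))  # same answer, linear in len(probs)
```

The enumeration visits 2^n assignments, while the fold does constant work per flip and so handles tens of thousands of flips easily; exploiting independence and similar program structure is what turns queries like this from intractable to routine.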

Steven Holtzen is a Ph.D. candidate at the University of California, Los Angeles, co-advised by Professors Guy Van den Broeck and Todd Millstein. His research focuses on the intersection of machine learning and programming languages. His work has been recognized with an ACM SIGPLAN Distinguished Paper Award, and he has been supported in part by a UCLA Dissertation Year Fellowship.
