Title: Assistant Professor
Arizona State University
As automated decision-making systems are increasingly deployed in areas with personal and societal impacts, there is a growing demand for artificial intelligence and machine learning systems that are fair, robust, interpretable, and generally trustworthy. This raises the need for a unified language and framework under which we can reason about and develop trustworthy AI systems. In this talk, I will discuss how tractable probabilistic reasoning and learning provides such a framework. Taking fairness-aware learning as an example, I will show how we can deal with biased labels in the training data by explicitly modeling the observed labels as generated from a probabilistic process that injects bias or noise into hidden, fair labels, in a way that best explains the observed data. The inferred fair labels can then be used to better audit the fairness of classifiers. In addition, I will discuss recent breakthroughs in tractable inference of more complex queries, such as information-theoretic quantities, to demonstrate the potential of probabilistic reasoning for trustworthy AI.
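The abstract does not specify a concrete model, but the idea of treating observed labels as noisy observations of hidden, fair labels can be illustrated with a minimal sketch. The setup below is entirely hypothetical (the prior `p_fair` and the group-dependent `flip_rates` are illustrative assumptions, not the speaker's model): a hidden fair label is flipped with some group-dependent probability to produce the observed label, and Bayes' rule recovers the posterior over the hidden label.

```python
# Hedged sketch: a tiny latent fair-label model.
# Assumed, not from the talk: hidden fair label H ~ Bernoulli(p_fair);
# the observed label Y disagrees with H with a group-dependent flip rate.

def posterior_fair_label(y_obs, group, p_fair=0.5, flip_rates=None):
    """Return P(H = 1 | Y = y_obs, group) via Bayes' rule."""
    if flip_rates is None:
        # Hypothetical per-group probabilities that the observed label
        # disagrees with the hidden fair label.
        flip_rates = {"a": 0.1, "b": 0.3}
    eps = flip_rates[group]
    # Likelihood of the observation under each hidden label value.
    like_h1 = (1 - eps) if y_obs == 1 else eps
    like_h0 = eps if y_obs == 1 else (1 - eps)
    numerator = like_h1 * p_fair
    denominator = numerator + like_h0 * (1 - p_fair)
    return numerator / denominator

# A positive label observed in the noisier group "b" is trusted less:
print(posterior_fair_label(1, "a"))  # 0.9
print(posterior_fair_label(1, "b"))  # 0.7
```

In a fuller treatment the flip rates and prior would themselves be fit to best explain the observed data (e.g., by maximum likelihood), rather than fixed by hand as here; the inferred posteriors over fair labels could then serve as the reference point for auditing a classifier.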
YooJung Choi is an assistant professor in the School of Computing and Augmented Intelligence at Arizona State University. Her research interests lie in probabilistic machine learning, tractable probabilistic modeling and inference, knowledge representation and reasoning, and trustworthy artificial intelligence (algorithmic fairness, robustness, and interpretability). She received her Ph.D. in Computer Science from the University of California, Los Angeles. She is the recipient of a research award from Cisco and a Simons-Berkeley Research Fellowship, and was selected for the AAAI 2023 New Faculty Highlights and the Rising Stars in EECS 2020 at UC Berkeley.