XAI has witnessed unprecedented growth in both academia and industry in recent years (alongside AI itself), given its crucial role in supporting human-AI partnerships. XAI allows opaque, data-driven AI methods to be deployed safely in a variety of settings, such as finance, healthcare and law.
XAI is positioned at the intersection of AI, human-computer interaction, the social sciences (in particular psychology) and applications.
Overall, XAI increasingly features in AI policies on ethics, trustworthiness and safety. This seminar series focuses on all aspects of XAI, ranging from methods to applications.
As AI systems increasingly permeate our daily lives, the need to ensure their decision-making processes are explainable has become paramount. In the first part of the talk, I will argue that explanation generation frameworks based on formal logic can serve as an explainability layer atop AI systems, capable of generating rigorous, flexible, and human-aware explanations across diverse domains. In particular, I will present a general logic-based framework that models the explanation problem as one of reconciling the knowledge bases of the AI system and the human user when discrepancies exist between the system's decision and the user's expectations. I will show how this framework can be augmented and extended to handle various settings, from single-shot explanations to dialectical explanations. In the second part of the talk, I will discuss my recent work on developing synergistic AI systems that combine the soundness of logic with the human-like language generation capabilities of Large Language Models (LLMs). Specifically, I will showcase a prototype of an end-to-end explainable system for explaining coursework planning to WashU CSE students. This system treats LLMs as "linguistic user interfaces" for facilitating natural and conversational interactions with human users, while employing a logic-based explanation generation framework for generating sound explanations. Such hybrid systems have the advantage of ensuring that their outputs are not only provably correct and robust, but also tailored to the human user's preferences and level of understanding.
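To give a flavour of the reconciliation idea described in the abstract, the sketch below is a toy illustration, not the speaker's actual framework: it represents both the system's and the human's knowledge as propositional Horn rules and searches for the smallest set of system-side rules whose addition to the human's knowledge base entails the system's decision. The coursework scenario and all rule names are hypothetical.

```python
from itertools import combinations

def entails(kb, goal):
    """Forward chaining over Horn rules of the form (frozenset_of_premises, conclusion)."""
    known, changed = set(), True
    while changed:
        changed = False
        for premises, conclusion in kb:
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                changed = True
    return goal in known

def reconcile(system_kb, human_kb, decision):
    """Return a smallest set of rules from the system's KB whose addition
    to the human's KB entails the decision -- a toy 'explanation'."""
    candidates = list(system_kb - human_kb)
    for size in range(len(candidates) + 1):
        for subset in combinations(candidates, size):
            if entails(human_kb | set(subset), decision):
                return set(subset)
    return None  # the decision does not follow even from the full system KB

# Hypothetical coursework-planning scenario: the system schedules CSE 412
# because the prerequisite is met, but the human's model lacks that fact.
system_kb = {
    (frozenset(), "passed_cse247"),                      # fact the human is missing
    (frozenset({"passed_cse247"}), "eligible_cse412"),
    (frozenset({"eligible_cse412"}), "schedule_cse412"),
}
human_kb = {
    (frozenset({"passed_cse247"}), "eligible_cse412"),
    (frozenset({"eligible_cse412"}), "schedule_cse412"),
}

print(reconcile(system_kb, human_kb, "schedule_cse412"))
# -> {(frozenset(), 'passed_cse247')}: the explanation that fills the gap.
```

In the hybrid systems discussed in the second part of the talk, an LLM would sit in front of such a logic layer, turning the returned rule set into a natural-language explanation tailored to the student.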
Stylianos is a PhD candidate at the Department of Computer Science and Engineering at Washington University in St. Louis, supervised by the amazing William Yeoh. His current research centers on developing human-aware decision-making frameworks for enabling and enhancing human-AI collaboration. As this is an inherently interdisciplinary subject, it requires drawing on ideas from a confluence of approaches such as knowledge representation & reasoning, planning, and machine learning, as well as philosophy, cognitive science, and psychology, which is what he has been exploring in his PhD. Within a preceding historical epoch, he received an MSc in Artificial Intelligence from the University of Southampton (UK), supervised by Long Tran-Thanh, an MSc in Applied Mathematics from the University of Glasgow (UK), supervised by Radostin Simitev, and finally a BSc in Statistics and Actuarial-Financial Mathematics from the University of the Aegean (GR), supervised by Nikolaos Kavallaris.
| Speaker | Title | Date |
|---|---|---|
| Nils Breuer | CAGE: Causality-Aware Shapley Value for Global Explanations | 21st November 2024, 16:00 GMT |
| Leila Methnani | TBC | 5th December 2024, 16:00 GMT |