Explainable AI Seminars @ Imperial

CLArg Group, Centre for eXplainable Artificial Intelligence, Imperial College London


XAI has witnessed unprecedented growth in both academia and industry in recent years (alongside AI itself), given its crucial role in supporting human-AI partnerships. XAI enables opaque, data-driven AI methods to be deployed safely in a variety of settings, such as finance, healthcare and law, and sits at the intersection of AI, human-computer interaction, the social sciences (in particular psychology) and their applications.
XAI is also increasingly part of AI policies on ethics, trustworthiness and safety. This seminar series covers all aspects of XAI, from methods to applications.

Get invites in your inbox - Sign up here

Next Seminar

Stylianos Loukas Vasileiou - "Explainable Decision-Making: From Formal Logic to AI Systems with Explainable Behavior"

17th October 2024, 16:00 BST, on Microsoft Teams (join online).


Abstract

As AI systems increasingly permeate our daily lives, ensuring that their decision-making processes are explainable has become paramount. In the first part of the talk, I will argue that explanation generation frameworks based on formal logic can serve as an explainability layer atop AI systems, capable of generating rigorous, flexible, and human-aware explanations across diverse domains. In particular, I will present a general logic-based framework that models the explanation problem as one of reconciling the knowledge bases of the AI system and the human user when there are discrepancies between the system's decision and the user's expectations. I will show how this framework can be augmented and extended to handle various settings, from single-shot explanations to dialectical explanations. In the second part of the talk, I will discuss my recent work on developing synergistic AI systems that combine the soundness of logic with the human-like language generation capabilities of Large Language Models (LLMs). Specifically, I will showcase a prototype of an end-to-end explainable system for explaining coursework planning to WashU CSE students. This system treats LLMs as "linguistic user interfaces" that facilitate natural, conversational interactions with human users, while employing a logic-based explanation generation framework to produce sound explanations. Such hybrid systems have the advantage that their outputs are not only provably correct and robust, but also tailored to the human user's preferences and level of understanding.
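The reconciliation idea in the abstract can be made concrete with a toy sketch. The Python below is illustrative only (it is not the speaker's implementation; the clause encoding and the minimal_explanation helper are assumptions for exposition): it brute-forces a smallest set of the system's formulas which, once added to the user's knowledge base, makes the system's decision follow.

```python
from itertools import combinations, product

# Toy model-reconciliation sketch (illustrative, not the speaker's code).
# A literal is (atom, polarity); a clause is a frozenset of literals;
# a knowledge base (KB) is a set of clauses, read as their conjunction.

def atoms(clauses):
    """All propositional atoms mentioned in a set of clauses."""
    return {a for clause in clauses for (a, _) in clause}

def satisfies(model, clause):
    """A clause holds in a model if at least one of its literals does."""
    return any(model[a] == polarity for (a, polarity) in clause)

def entails(kb, query):
    """Naive truth-table entailment: every model of kb satisfies query."""
    vocab = sorted(atoms(kb | {query}))
    for values in product([False, True], repeat=len(vocab)):
        model = dict(zip(vocab, values))
        if all(satisfies(model, c) for c in kb) and not satisfies(model, query):
            return False
    return True

def minimal_explanation(kb_system, kb_user, decision):
    """Smallest set of system clauses whose addition to the user's KB
    makes the decision follow (one simple reading of reconciliation)."""
    candidates = list(kb_system - kb_user)
    for size in range(len(candidates) + 1):
        for subset in combinations(candidates, size):
            if entails(kb_user | set(subset), decision):
                return set(subset)
    return None  # not explainable even with the whole system KB

# Example: the system denied a loan (d) because the applicant is high
# risk (r) and it believes r -> d; the user only knows i ("income ok").
rule = frozenset({("r", False), ("d", True)})  # clause ~r v d, i.e. r -> d
fact = frozenset({("r", True)})                # r
kb_system = {rule, fact}
kb_user = {frozenset({("i", True)})}
decision = frozenset({("d", True)})
print(minimal_explanation(kb_system, kb_user, decision))
# -> {r -> d, r}: telling the user both reconciles the disagreement
```

A real framework would of course replace the exponential truth-table check with a SAT-solver or theorem-prover call, and would also consider removing mistaken formulas from the user's model rather than only adding to it.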

Bio

Stylianos is a PhD candidate in the Department of Computer Science and Engineering at Washington University in St. Louis, supervised by William Yeoh. His research centres on developing human-aware decision-making frameworks for enabling and enhancing human-AI collaboration. As this is an inherently interdisciplinary subject, it draws on a confluence of approaches, from knowledge representation & reasoning, planning and machine learning to philosophy, cognitive science and psychology, all of which he has explored during his PhD. He previously received an MSc in Artificial Intelligence from the University of Southampton (UK), supervised by Long Tran-Thanh, an MSc in Applied Mathematics from the University of Glasgow (UK), supervised by Radostin Simitev, and a BSc in Statistics and Actuarial-Financial Mathematics from the University of the Aegean (GR), supervised by Nikolaos Kavallaris.

Planned Seminars

Speaker | Title | Date
Nils Breuer | CAGE: Causality-Aware Shapley Value for Global Explanations | 21st November 2024, 16:00 GMT
Leila Methnani | TBC | 5th December 2024, 16:00 GMT

Past Seminars (since 2019)

Speaker | Title | Date | Attendance
Nevin L. Zhang | Two-Stage Holistic and Contrastive Explanation of Image Classification | July 2, 2024 | Over 20
Jacopo Bono | DiConStruct: Causal Concept-based Explanations through Black-Box Distillation | June 20, 2024 | Over 15
Jacopo Teneggi & Beepul Bharti | SHAP-XRT: The Shapley Value Meets Conditional Independence Testing | May 30, 2024 | Over 25
Patrick Altmeyer | ECCCos from the Black Box: Faithful Model Explanations through Energy-Constrained Conformal Counterfactuals | May 13, 2024 | Over 25
Eoin Delaney | Counterfactual explanations for misclassified images: How human and machine explanations differ | May 2, 2024 | Over 25
David Watson | LENS - Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice | April 16, 2024 | Over 25
Tom Nuno Wolf | Towards interpretable neural networks for differential dementia diagnosis | April 4, 2024 | Over 30
Wonjoon Chang | Understanding Learned Representations in Deep Neural Networks without Supervision | March 21, 2024 | Over 20
Daphne Odekerken | Algorithms for transparent human-in-the-loop decision support at the Dutch police | March 5, 2024 | Over 35
Mihaela van der Schaar and Krzysztof Kacprzyk | The Road to Transparent AI: Latest Breakthroughs in Time-Series Interpretability | February 19, 2024 | Over 130
Dhagash Mehta and Joshua Rosaler | Towards Enhanced Local Explainability of Random Forests: a Proximity-Based Approach | February 7, 2024 | Over 20
Tim Miller | Explainable AI is dead! Long live Explainable AI! Or: why your AI tool probably doesn’t work and why it is so *#$# hard to get it to do so | January 23, 2024 | Over 45
Xuehao Zhai | Post-hoc explanations for AI in transport planning | December 14, 2023 | Over 15
Marcelo Finger | Adventures in Natural Language Processing in Brazil during the Pandemic: The SPIRA Project | November 30, 2023 | Over 25
Pranava Madhyastha | Explanations and robustness in Multimodal Language Models | November 16, 2023 | Over 15
Nirmalie Wiratunga | Intelligent Reuse of Explanation Experiences: The Role of Case-Based Reasoning in Promoting Explainable AI for Users by Users | November 2, 2023 | Over 20
Francesco Leofante | Robust Explainable AI: the Case of Counterfactual Explanations | October 27, 2023 | Over 40
Son Tran | The Model Reconciliation Problem and Explainable AI | June 21, 2023 | Over 25
Luciano Serafini | Learning and inference with hybrid models (three examples) | June 6, 2023 | Over 25
Matthew Wicker | On the Synergy between Robustness and Explainability | May 23, 2023 | Over 30
Julius von Kügelgen | Backtracking Counterfactuals | May 3, 2023 | Over 20
Gerard Canal | Explanations for planning robots: verbal causal narrations of plans and proactivity in explanations | March 24, 2023 | Over 10
Ana Ozaki | Querying Neural Networks: The BERT Case | March 1, 2023 | Over 20
Pietro Totis | Reasoning on Arguments and Beliefs with Probabilistic Logic Programs | December 8, 2022 | Over 25
Eleonora Giunchiglia | Deep Learning with Hard Logical Constraints | November 23, 2022 | Over 25
Emiliano Lorini | Non-Classical Logics for Explanations in AI Systems | November 9, 2022 | Over 25
Cor Steging | Responsible AI: Towards a Hybrid Method for Evaluating Data-Driven Decision-making | October 26, 2022 | Over 20
Gopal Gupta | Automating Commonsense Reasoning | October 14, 2022 | Over 20
Benjamin Grosof | Towards Stronger Hybrid AI: Combining Extended Logic Programs with Natural Language and Machine Learning | October 5, 2022 | Over 30
Lun AI | Effects of machine-learned logic theories on human comprehension in machine-human teaching | July 13, 2022 | Over 10
Dylan Slack | Exposing Shortcomings and Improving the Reliability of ML Models | June 22, 2022 | Over 15
Joao Leite | Logic-based Explanations for Neural Networks | June 9, 2022 | Over 15
Oana Camburu | Neural Networks with Natural Language Explanations | May 26, 2022 | Over 15
Nino Scherrer | Learning Neural Causal Models with Active Interventions | April 27, 2022 | Over 10
Hamed Ayoobi | Explain What You See: Argumentation-Based Learning for 3D Object Recognition | April 6, 2022 | Over 10
Mattia Setzu | Breaking the Local/Global explanation dichotomy: GLocalX and the Local to Global explanation paradigm | March 17, 2022 | Over 10
Eoin Kenny | Explaining Black Box Classifiers via Post-Hoc Explanation-by-Example: Factual, Semi-Factual, and Counterfactual Explanations | March 2, 2022 | Over 20
Riccardo Crupi | Counterfactual Explanations as Interventions in Latent Space | February 14, 2022 | Over 20
Martin Jullum | Prediction explanation with Shapley values | February 3, 2022 | Over 15
Ioannis Votsis | The Study of Reasoning in Philosophy, Psychology and AI: In Search of Synergies | January 19, 2022 | Over 15
Michael Yeomans | Conversational Receptiveness: Improving Engagement with Opposing Views | December 13, 2021 | Over 10
Piyawat Lertvittayakumjorn | Explanation-Based Human Debugging of NLP Models | December 1, 2021 | Over 15
Leila Amgoud | Explaining Black-Box Classifiers: Properties and Functions | November 24, 2021 | Over 40
Guilherme Paulino-Passos | Monotonicity, Noise-Tolerance, and Explanations in Case-Based Reasoning with Abstract Argumentation | November 10, 2021 | Over 10
- | explAIn Workshop: Exploring the links between Explainable AI, Causality and Persuasion | July 8, 2021 | Over 30
Fabrizio Silvestri | Counterfactual Explanations of (some) Machine Learning Models | June 9, 2021 | Over 15
Marek Sergot | Actual cause and chancy causation in 'stit' logics | June 2, 2021 | Over 15
Menna El-Assady | Visual Analytics Perspectives on Interactive and Explainable Machine Learning | May 5, 2021 | Over 25
Tony Hunter | Overview of Computational Persuasion and relationships with Explainable AI | April 14, 2021 | Over 20
Timotheus Kampik | Principle-based and Explainable Reasoning: From Humans to Machines | March 17, 2021 | Over 10
Umang Bhatt | Practical Approaches to Explainable Machine Learning | February 24, 2021 | Over 30
Riccardo Guidotti | Exploiting Auto-Encoders for Explaining Black Box Classifiers | January 27, 2021 | Over 40
Kacper Sokol | Modular Machine Learning Interpretability: A Case Study of Surrogate Explainers | December 9, 2020 | Over 15
Hana Chockler | Why do things go wrong (or right)? | November 25, 2020 | Over 40
Emanuele Albini | Relation-based counterfactual explanations for Bayesian classifiers | November 4, 2020 | Over 40
Jannes Klass | Explainable AI in the Wild: Lessons learned from applying explainable AI to real world use cases | March 4, 2020 | Over 15
Brent Mittelstadt | Governance of AI through explanations: From approximations to counterfactuals | February 12, 2020 | Over 20
Claudia Schulz | Explaining (how to improve) Diagnostic Reasoning | January 15, 2020 | Over 30
Erisa Karafili | Helping Forensics Analysts to Understand and Attribute Cyber-Attacks | December 4, 2019 | Over 15
Pasquale Minervini | Explainable, Data-Efficient, Verifiable Representation Learning in Knowledge Graphs | November 27, 2019 | Over 15
Adam White | Measurable Counterfactual Explanations for Any Classifier | November 13, 2019 | Over 15
Loizos Michael | Acceptable Explanations through Machine Coaching | November 7, 2019 | Over 15
Dave Braines | Conversational Explanations – Explainable AI through human-machine conversation | October 14, 2019 | Over 25
Zohreh Shams | Explanation in Ontology Reasoning | October 1, 2019 | Over 25
Filip Radlinski | User-Centric Recommendation | July 4, 2019 | Over 25
Sanjay Modgil | Dialectical Formalizations of Non-monotonic Reasoning: Rationality under Resource Bounds | June 20, 2019 | Over 20
Daniele Magazzeni | Model-Based Reasoning for Explainable AI Planning as a Service | May 30, 2019 | Over 25
Christos Bechlivanidis | Concrete and Abstract Explanations | May 20, 2019 | Over 20
Simone Stumpf | Making Human-Centred Machine Intelligence Intelligible | April 15, 2019 | Over 20
Ken Satoh | ContractFrames: Bridging the gap between Natural Language and Logics in Contract Law | March 25, 2019 | Over 20

Contact Information

Email: xai-CO@groups.imperial.ac.uk

Organisers

Guilherme Paulino-Passos
Fabrizio Russo