Explainable AI Seminars @ Imperial

CLArg Group, Centre for eXplainable Artificial Intelligence, Imperial College London


XAI has witnessed unprecedented growth in both academia and industry in recent years (alongside AI itself), given its crucial role in supporting human-AI partnerships. XAI enables opaque data-driven AI methods to be deployed safely in a variety of settings, such as finance, healthcare and law. XAI sits at the intersection of AI, human-computer interaction, the social sciences (in particular psychology) and application domains.
XAI is also increasingly part of policies on the ethics, trustworthiness and safety of AI. This seminar series covers all aspects of XAI, from methods to applications.

Next Seminar: 21st November, 16:00 GMT

Nils Breuer - "CAGE: Causality-Aware Shapley Value for Global Explanations"


Get invites in your inbox - Sign up here

Next Seminar

Nils Breuer - "CAGE: Causality-Aware Shapley Value for Global Explanations"

21st November, 16:00 GMT, on Microsoft Teams (join online).


Abstract

As Artificial Intelligence (AI) has an ever greater influence on our everyday lives, it becomes important that AI-based decisions are transparent and explainable. As a consequence, the field of eXplainable AI (or XAI) has become popular in recent years. One way to explain AI models is to elucidate the predictive importance of the input features for the AI model in general, also referred to as global explanations. Inspired by cooperative game theory, Shapley values offer a convenient way to quantify feature importance as explanations. However, many methods based on Shapley values are built on the assumption of feature independence and often overlook causal relations among the features, which could impact their importance for the ML model. Inspired by studies of explanations at the local level, we propose CAGE (Causally-Aware Shapley Values for Global Explanations). In particular, we introduce a novel sampling procedure for out-of-coalition features that respects the causal relations of the input features. We derive a practical approach that incorporates causal knowledge into global explanations and offers the possibility to interpret the predictive feature importance in light of their causal relations. We evaluate our method on synthetic and real-world data. The results suggest that explanations from our approach are not only more intuitive but also more faithful compared to previous global explanation methods.
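For readers unfamiliar with the general idea, the sketch below estimates Shapley-style global feature importances while sampling out-of-coalition features along a causal ordering. It is only a minimal illustration of the concept, not the speaker's implementation: the toy structural causal model, the linear "black-box" model and all function names are illustrative assumptions.

```python
# Minimal sketch of causality-aware Shapley-style global importance.
# NOT the CAGE implementation: the toy SCM, the black-box model and all
# names below are illustrative assumptions for exposition only.
import numpy as np

rng = np.random.default_rng(0)

# Causal graph over three toy features: X0 -> X1 -> X2 (a simple chain).
TOPO_ORDER = [0, 1, 2]

def structural_eq(j, X):
    """Linear-Gaussian structural equations of the assumed toy SCM."""
    noise = rng.normal(scale=0.3, size=len(X))
    if j == 0:
        return rng.normal(size=len(X))
    if j == 1:
        return 0.8 * X[:, 0] + noise
    return -0.5 * X[:, 1] + noise

def model(X):
    """Black-box model to be explained (here just a fixed linear function)."""
    return 2.0 * X[:, 0] + 1.0 * X[:, 2]

def causal_sample(x, coalition, n=256):
    """Sample features in topological order: coalition features are clamped
    to the values in x (an intervention), the remaining features are drawn
    from their structural equations given the already-set parents."""
    X = np.zeros((n, len(TOPO_ORDER)))
    for j in TOPO_ORDER:
        X[:, j] = x[j] if j in coalition else structural_eq(j, X)
    return X

def global_causal_shapley(X_data, n_perms=10):
    """Permutation estimate of per-feature Shapley importance, averaged over
    the data set (a 'global' explanation in the sense of the abstract)."""
    d = X_data.shape[1]
    phi = np.zeros(d)
    for x in X_data:
        for _ in range(n_perms):
            coalition = set()
            prev = model(causal_sample(x, coalition)).mean()
            for j in rng.permutation(d):
                coalition.add(int(j))
                cur = model(causal_sample(x, coalition)).mean()
                phi[int(j)] += (cur - prev) / (n_perms * len(X_data))
                prev = cur
    return phi

# Observational data: ancestral sampling with an empty coalition.
X_data = causal_sample(np.zeros(3), coalition=set(), n=100)
print(global_causal_shapley(X_data))  # one importance score per feature
```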

Bio

After a bachelor's degree in cognitive science at the University of Tübingen and a master's degree in artificial intelligence at the Vrije Universiteit Amsterdam, I am now a PhD student at the Technical University of Berlin, supervised by Prof. Dr. Dr. Sahin Albayrak. My research lies at the intersection of causality and interpretable AI: I investigate how methods from causality and interpretability are linked and how they can benefit from each other. In addition, within a government-funded research project called goKI (a German abbreviation for "artificial intelligence for the common good"), I investigate how common-good-oriented AI applications can be made transparent and explainable for a diverse and broad group of people.

Planned Seminars

Speaker Title Date
Leila Methnani TBC 5th December, 2024, 16:00 GMT

Past Seminars (since 2019)

Speaker Title Date Attendance
Stylianos Loukas Vasileiou Explainable Decision-Making: From Formal Logic to AI Systems with Explainable Behavior October 17, 2024 Over 35
Nevin L. Zhang Two-Stage Holistic and Contrastive Explanation of Image Classification July 2, 2024 Over 20
Jacopo Bono DiConStruct: Causal Concept-based Explanations through Black-Box Distillation June 20, 2024 Over 15
Jacopo Teneggi & Beepul Bharti SHAP-XRT: The Shapley Value Meets Conditional Independence Testing May 30, 2024 Over 25
Patrick Altmeyer ECCCos from the Black Box: Faithful Model Explanations through Energy-Constrained Conformal Counterfactuals May 13, 2024 Over 25
Eoin Delaney Counterfactual explanations for misclassified images: How human and machine explanations differ May 2, 2024 Over 25
David Watson LENS - Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice April 16, 2024 Over 25
Tom Nuno Wolf Towards interpretable neural networks for differential dementia diagnosis April 4, 2024 Over 30
Wonjoon Chang Understanding Learned Representations in Deep Neural Networks without Supervision March 21, 2024 Over 20
Daphne Odekerken Algorithms for transparent human-in-the-loop decision support at the Dutch police March 5, 2024 Over 35
Mihaela van der Schaar and Krzysztof Kacprzyk The Road to Transparent AI: Latest Breakthroughs in Time-Series Interpretability February 19, 2024 Over 130
Dhagash Mehta and Joshua Rosaler Towards Enhanced Local Explainability of Random Forests: a Proximity-Based Approach February 7, 2024 Over 20
Tim Miller Explainable AI is dead! Long live Explainable AI! Or: why your AI tool probably doesn’t work and why it is so *#$# hard to get it to do so January 23, 2024 Over 45
Xuehao Zhai Post-hoc explanations for AI in transport planning December 14, 2023 Over 15
Marcelo Finger Adventures in Natural Language Processing in Brazil during the Pandemic: The SPIRA Project November 30, 2023 Over 25
Pranava Madhyastha Explanations and robustness in Multimodal Language Models November 16, 2023 Over 15
Nirmalie Wiratunga Intelligent Reuse of Explanation Experiences: The Role of Case-Based Reasoning in Promoting Explainable AI for Users by Users November 2, 2023 Over 20
Francesco Leofante Robust Explainable AI: the Case of Counterfactual Explanations October 27, 2023 Over 40
Son Tran The Model Reconciliation Problem and Explainable AI June 21, 2023 Over 25
Luciano Serafini Learning and inference with hybrid models (three examples) June 06, 2023 Over 25
Matthew Wicker On the Synergy between Robustness and Explainability May 23, 2023 Over 30
Julius von Kügelgen Backtracking Counterfactuals May 3, 2023 Over 20
Gerard Canal Explanations for planning robots: verbal causal narrations of plans and proactivity in explanations March 24, 2023 Over 10
Ana Ozaki Querying Neural Networks: The BERT Case March 1, 2023 Over 20
Pietro Totis Reasoning on Arguments and Beliefs with Probabilistic Logic Programs December 8, 2022 Over 25
Eleonora Giunchiglia Deep Learning with Hard Logical Constraints November 23, 2022 Over 25
Emiliano Lorini Non-Classical Logics for Explanations in AI Systems November 9, 2022 Over 25
Cor Steging Responsible AI: Towards a Hybrid Method for Evaluating Data-Driven Decision-making October 26, 2022 Over 20
Gopal Gupta Automating Commonsense Reasoning October 14, 2022 Over 20
Benjamin Grosof Towards Stronger Hybrid AI: Combining Extended Logic Programs with Natural Language and Machine Learning October 5, 2022 Over 30
Lun AI Effects of machine-learned logic theories on human comprehension in machine-human teaching July 13, 2022 Over 10
Dylan Slack Exposing Shortcomings and Improving the Reliability of ML Models June 22, 2022 Over 15
Joao Leite Logic-based Explanations for Neural Networks June 9, 2022 Over 15
Oana Camburu Neural Networks with Natural Language Explanations May 26, 2022 Over 15
Nino Scherrer Learning Neural Causal Models with Active Interventions April 27, 2022 Over 10
Hamed Ayoobi Explain What You See: Argumentation-Based Learning for 3D Object Recognition April 6, 2022 Over 10
Mattia Setzu Breaking the Local/Global explanation dichotomy: GLocalX and the Local to Global explanation paradigm March 17, 2022 Over 10
Eoin Kenny Explaining Black Box Classifiers via Post-Hoc Explanation-by-Example: Factual, Semi-Factual, and Counterfactual Explanations March 2, 2022 Over 20
Riccardo Crupi Counterfactual Explanations as Interventions in Latent Space February 14, 2022 Over 20
Martin Jullum Prediction explanation with Shapley values February 3, 2022 Over 15
Ioannis Votsis The Study of Reasoning in Philosophy, Psychology and AI: In Search of Synergies January 19, 2022 Over 15
Michael Yeomans Conversational Receptiveness: Improving Engagement with Opposing Views December 13, 2021 Over 10
Piyawat Lertvittayakumjorn Explanation-Based Human Debugging of NLP Models December 1, 2021 Over 15
Leila Amgoud Explaining Black-Box Classifiers: Properties and Functions November 24, 2021 Over 40
Guilherme Paulino-Passos Monotonicity, Noise-Tolerance, and Explanations in Case-Based Reasoning with Abstract Argumentation November 10, 2021 Over 10
- explAIn Workshop: Exploring the links between Explainable AI, Causality and Persuasion July 8, 2021 Over 30
Fabrizio Silvestri Counterfactual Explanations of (some) Machine Learning Models June 9, 2021 Over 15
Marek Sergot Actual cause and chancy causation in 'stit' logics June 2, 2021 Over 15
Menna El-Assady Visual Analytics Perspectives on Interactive and Explainable Machine Learning May 5, 2021 Over 25
Tony Hunter Overview of Computational Persuasion and relationships with Explainable AI April 14, 2021 Over 20
Timotheus Kampik Principle-based and Explainable Reasoning: From Humans to Machines March 17, 2021 Over 10
Umang Bhatt Practical Approaches to Explainable Machine Learning February 24, 2021 Over 30
Riccardo Guidotti Exploiting Auto-Encoders for Explaining Black Box Classifiers January 27, 2021 Over 40
Kacper Sokol Modular Machine Learning Interpretability: A Case Study of Surrogate Explainers December 9, 2020 Over 15
Hana Chockler Why do things go wrong (or right)? November 25, 2020 Over 40
Emanuele Albini Relation-based counterfactual explanations for Bayesian classifiers November 4, 2020 Over 40
Jannes Klass Explainable AI in the Wild: Lessons learned from applying explainable AI to real world use cases March 4, 2020 Over 15
Brent Mittelstadt Governance of AI through explanations: From approximations to counterfactuals February 12, 2020 Over 20
Claudia Schulz Explaining (how to improve) Diagnostic Reasoning January 15, 2020 Over 30
Erisa Karafili Helping Forensics Analysts to Understand and Attribute Cyber-Attacks December 4, 2019 Over 15
Pasquale Minervini Explainable, Data-Efficient, Verifiable Representation Learning in Knowledge Graphs November 27, 2019 Over 15
Adam White Measurable Counterfactual Explanations for Any Classifier November 13, 2019 Over 15
Loizos Michael Acceptable Explanations through Machine Coaching November 7, 2019 Over 15
Dave Braines Conversational Explanations – Explainable AI through human-machine conversation October 14, 2019 Over 25
Zohreh Shams Explanation in Ontology Reasoning October 1, 2019 Over 25
Filip Radlinski User-Centric Recommendation July 4, 2019 Over 25
Sanjay Modgil Dialectical Formalizations of Non-monotonic Reasoning: Rationality under Resource Bounds June 20, 2019 Over 20
Daniele Magazzeni Model-Based Reasoning for Explainable AI Planning as a Service May 30, 2019 Over 25
Christos Bechlivanidis Concrete and Abstract Explanations May 20, 2019 Over 20
Simone Stumpf Making Human-Centred Machine Intelligence Intelligible April 15, 2019 Over 20
Ken Satoh ContractFrames: Bridging the gap between Natural Language and Logics in Contract Law March 25, 2019 Over 20
Number of talks: 71

Contact Information

Email: xai-CO@groups.imperial.ac.uk

Organisers

Guilherme Paulino-Passos, Fabrizio Russo