Explainable AI Seminars @ Imperial

CLArg Group, Centre for eXplainable Artificial Intelligence, Imperial College London


XAI has witnessed unprecedented growth in both academia and industry in recent years (alongside AI itself), given its crucial role in supporting human-AI partnerships. XAI enables opaque, data-driven AI methods to be deployed safely in a variety of settings, such as finance, healthcare and law. It sits at the intersection of AI, human-computer interaction, the social sciences (in particular psychology) and applications.
XAI is also increasingly part of AI policies on ethics, trustworthiness and safety. This seminar series focuses on all aspects of XAI, ranging from methods to applications.

Next Seminar

Patrick Altmeyer: "ECCCos from the Black Box: Faithful Model Explanations through Energy-Constrained Conformal Counterfactuals"

13th May, 16:00 BST, Imperial College London South Kensington Campus, Huxley Building LT308, and on Microsoft Teams.


Abstract

Counterfactual explanations offer an intuitive and straightforward way to explain black-box models and provide algorithmic recourse to individuals. To address the need for plausible explanations, existing work has primarily relied on surrogate models to learn how the input data is distributed. This effectively reallocates the task of learning realistic explanations for the data from the model itself to the surrogate. Consequently, the generated explanations may seem plausible to humans but need not necessarily describe the behaviour of the black-box model faithfully. We formalise this notion of faithfulness through the introduction of a tailored evaluation metric and propose a novel algorithmic framework for generating Energy-Constrained Conformal Counterfactuals that are only as plausible as the model permits. Through extensive empirical studies, we demonstrate that ECCCo reconciles the need for faithfulness and plausibility. In particular, we show that for models with gradient access, it is possible to achieve state-of-the-art performance without the need for surrogate models. To do so, our framework relies solely on properties defining the black-box model itself by leveraging recent advances in energy-based modelling and conformal prediction. To our knowledge, this is the first venture in this direction for generating faithful counterfactual explanations. Thus, we anticipate that ECCCo can serve as a baseline for future research. We believe that our work opens avenues for researchers and practitioners seeking tools to better distinguish trustworthy from unreliable models.
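
To give a flavour of the approach, below is a minimal, illustrative sketch of gradient-based counterfactual search with a plausibility penalty derived from the model itself, loosely in the spirit of the talk. It is not the authors' implementation: the actual ECCCo framework also uses conformal prediction, which is omitted here, and the function names, weights and NumPy finite-difference setup are assumptions made only to keep the sketch self-contained.

```python
import numpy as np

def counterfactual(prob, energy, x, alpha=0.1, lam=0.5, lr=0.05,
                   steps=200, eps=1e-4):
    """Gradient-descent search for a counterfactual x_cf near x.

    prob(z)   -> the model's probability of the target class at z.
    energy(z) -> scalar "energy" of z under the model; lower means more
                 plausible according to the model itself (a stand-in for
                 the energy-based term; the conformal term is omitted).
    """
    x_cf = x.astype(float)

    def loss(z):
        # Flip the prediction, stay close to x, stay in low-energy regions.
        return (1.0 - prob(z)) + alpha * np.linalg.norm(z - x) + lam * energy(z)

    basis = np.eye(x_cf.size)
    for _ in range(steps):
        # Central finite differences keep the sketch free of autodiff deps.
        grad = np.array([(loss(x_cf + eps * e) - loss(x_cf - eps * e)) / (2 * eps)
                         for e in basis])
        x_cf = x_cf - lr * grad
        if prob(x_cf) > 0.5:  # stop once the model predicts the target class
            break
    return x_cf

# Toy usage: a logistic model on 2D inputs, with squared distance from a
# (hypothetical) positive-class mean as a crude energy function.
w, mu = np.array([2.0, -1.0]), np.array([1.5, -0.5])
prob = lambda z: 1.0 / (1.0 + np.exp(-(z @ w)))
energy = lambda z: float(np.sum((z - mu) ** 2))
print(counterfactual(prob, energy, np.array([-1.0, 1.0])))
```

The energy penalty pulls the search towards regions the model itself deems plausible, which is the intuition behind avoiding a separate surrogate model for plausibility.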

Bio

Patrick is a PhD Candidate in Trustworthy Artificial Intelligence at Delft University of Technology, working at the intersection of Computer Science and Finance. His current research revolves around Counterfactual Explanations and Probabilistic Machine Learning. Previously, Patrick worked as an Economist for the Bank of England. He is also an active contributor to and member of the Julia community and has open-sourced a range of packages geared towards Trustworthy AI.

Planned Seminars

Speaker | Title | Date
Jacopo Teneggi & Beepul Bharti | SHAP-XRT: The Shapley Value Meets Conditional Independence Testing | 30th May 2024, 14:00 BST
Jacopo Bono | DiConStruct: Causal Concept-based Explanations through Black-Box Distillation | 18th June 2024, 11:00 BST
Nevin L. Zhang | TBC | 2nd July 2024, 15:00 BST

Past Seminars (since 2019)

Speaker | Title | Date | Attendance
Eoin Delaney | Counterfactual explanations for misclassified images: How human and machine explanations differ | May 2, 2024 | Over 25
David Watson | LENS - Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice | April 16, 2024 | Over 25
Tom Nuno Wolf | Towards interpretable neural networks for differential dementia diagnosis | April 4, 2024 | Over 30
Wonjoon Chang | Understanding Learned Representations in Deep Neural Networks without Supervision | March 21, 2024 | Over 20
Daphne Odekerken | Algorithms for transparent human-in-the-loop decision support at the Dutch police | March 5, 2024 | Over 35
Mihaela van der Schaar and Krzysztof Kacprzyk | The Road to Transparent AI: Latest Breakthroughs in Time-Series Interpretability | February 19, 2024 | Over 130
Dhagash Mehta and Joshua Rosaler | Towards Enhanced Local Explainability of Random Forests: a Proximity-Based Approach | February 7, 2024 | Over 20
Tim Miller | Explainable AI is dead! Long live Explainable AI! Or: why your AI tool probably doesn’t work and why it is so *#$# hard to get it to do so | January 23, 2024 | Over 45
Xuehao Zhai | Post-hoc explanations for AI in transport planning | December 14, 2023 | Over 15
Marcelo Finger | Adventures in Natural Language Processing in Brazil during the Pandemic: The SPIRA Project | November 30, 2023 | Over 25
Pranava Madhyastha | Explanations and robustness in Multimodal Language Models | November 16, 2023 | Over 15
Nirmalie Wiratunga | Intelligent Reuse of Explanation Experiences: The Role of Case-Based Reasoning in Promoting Explainable AI for Users by Users | November 2, 2023 | Over 20
Francesco Leofante | Robust Explainable AI: the Case of Counterfactual Explanations | October 27, 2023 | Over 40
Son Tran | The Model Reconciliation Problem and Explainable AI | June 21, 2023 | Over 25
Luciano Serafini | Learning and inference with hybrid models (three examples) | June 6, 2023 | Over 25
Matthew Wicker | On the Synergy between Robustness and Explainability | May 23, 2023 | Over 30
Julius von Kügelgen | Backtracking Counterfactuals | May 3, 2023 | Over 20
Gerard Canal | Explanations for planning robots: verbal causal narrations of plans and proactivity in explanations | March 24, 2023 | Over 10
Ana Ozaki | Querying Neural Networks: The BERT Case | March 1, 2023 | Over 20
Pietro Totis | Reasoning on Arguments and Beliefs with Probabilistic Logic Programs | December 8, 2022 | Over 25
Eleonora Giunchiglia | Deep Learning with Hard Logical Constraints | November 23, 2022 | Over 25
Emiliano Lorini | Non-Classical Logics for Explanations in AI Systems | November 9, 2022 | Over 25
Cor Steging | Responsible AI: Towards a Hybrid Method for Evaluating Data-Driven Decision-making | October 26, 2022 | Over 20
Gopal Gupta | Automating Commonsense Reasoning | October 14, 2022 | Over 20
Benjamin Grosof | Towards Stronger Hybrid AI: Combining Extended Logic Programs with Natural Language and Machine Learning | October 5, 2022 | Over 30
Lun Ai | Effects of machine-learned logic theories on human comprehension in machine-human teaching | July 13, 2022 | Over 10
Dylan Slack | Exposing Shortcomings and Improving the Reliability of ML Models | June 22, 2022 | Over 15
Joao Leite | Logic-based Explanations for Neural Networks | June 9, 2022 | Over 15
Oana Camburu | Neural Networks with Natural Language Explanations | May 26, 2022 | Over 15
Nino Scherrer | Learning Neural Causal Models with Active Interventions | April 27, 2022 | Over 10
Hamed Ayoobi | Explain What You See: Argumentation-Based Learning for 3D Object Recognition | April 6, 2022 | Over 10
Mattia Setzu | Breaking the Local/Global explanation dichotomy: GLocalX and the Local to Global explanation paradigm | March 17, 2022 | Over 10
Eoin Kenny | Explaining Black Box Classifiers via Post-Hoc Explanation-by-Example: Factual, Semi-Factual, and Counterfactual Explanations | March 2, 2022 | Over 20
Riccardo Crupi | Counterfactual Explanations as Interventions in Latent Space | February 14, 2022 | Over 20
Martin Jullum | Prediction explanation with Shapley values | February 3, 2022 | Over 15
Ioannis Votsis | The Study of Reasoning in Philosophy, Psychology and AI: In Search of Synergies | January 19, 2022 | Over 15
Michael Yeomans | Conversational Receptiveness: Improving Engagement with Opposing Views | December 13, 2021 | Over 10
Piyawat Lertvittayakumjorn | Explanation-Based Human Debugging of NLP Models | December 1, 2021 | Over 15
Leila Amgoud | Explaining Black-Box Classifiers: Properties and Functions | November 24, 2021 | Over 40
Guilherme Paulino-Passos | Monotonicity, Noise-Tolerance, and Explanations in Case-Based Reasoning with Abstract Argumentation | November 10, 2021 | Over 10
- | explAIn Workshop: Exploring the links between Explainable AI, Causality and Persuasion | July 8, 2021 | Over 30
Fabrizio Silvestri | Counterfactual Explanations of (some) Machine Learning Models | June 9, 2021 | Over 15
Marek Sergot | Actual cause and chancy causation in 'stit' logics | June 2, 2021 | Over 15
Menna El-Assady | Visual Analytics Perspectives on Interactive and Explainable Machine Learning | May 5, 2021 | Over 25
Tony Hunter | Overview of Computational Persuasion and relationships with Explainable AI | April 14, 2021 | Over 20
Timotheus Kampik | Principle-based and Explainable Reasoning: From Humans to Machines | March 17, 2021 | Over 10
Umang Bhatt | Practical Approaches to Explainable Machine Learning | February 24, 2021 | Over 30
Riccardo Guidotti | Exploiting Auto-Encoders for Explaining Black Box Classifiers | January 27, 2021 | Over 40
Kacper Sokol | Modular Machine Learning Interpretability: A Case Study of Surrogate Explainers | December 9, 2020 | Over 15
Hana Chockler | Why do things go wrong (or right)? | November 25, 2020 | Over 40
Emanuele Albini | Relation-based counterfactual explanations for Bayesian classifiers | November 4, 2020 | Over 40
Jannes Klass | Explainable AI in the Wild: Lessons learned from applying explainable AI to real world use cases | March 4, 2020 | Over 15
Brent Mittelstadt | Governance of AI through explanations: From approximations to counterfactuals | February 12, 2020 | Over 20
Claudia Schulz | Explaining (how to improve) Diagnostic Reasoning | January 15, 2020 | Over 30
Erisa Karafili | Helping Forensics Analysts to Understand and Attribute Cyber-Attacks | December 4, 2019 | Over 15
Pasquale Minervini | Explainable, Data-Efficient, Verifiable Representation Learning in Knowledge Graphs | November 27, 2019 | Over 15
Adam White | Measurable Counterfactual Explanations for Any Classifier | November 13, 2019 | Over 15
Loizos Michael | Acceptable Explanations through Machine Coaching | November 7, 2019 | Over 15
Dave Braines | Conversational Explanations – Explainable AI through human-machine conversation | October 14, 2019 | Over 25
Zohreh Shams | Explanation in Ontology Reasoning | October 1, 2019 | Over 25
Filip Radlinski | User-Centric Recommendation | July 4, 2019 | Over 25
Sanjay Modgil | Dialectical Formalizations of Non-monotonic Reasoning: Rationality under Resource Bounds | June 20, 2019 | Over 20
Daniele Magazzeni | Model-Based Reasoning for Explainable AI Planning as a Service | May 30, 2019 | Over 25
Christos Bechlivanidis | Concrete and Abstract Explanations | May 20, 2019 | Over 20
Simone Stumpf | Making Human-Centred Machine Intelligence Intelligible | April 15, 2019 | Over 20
Ken Satoh | ContractFrames: Bridging the gap between Natural Language and Logics in Contract Law | March 25, 2019 | Over 20

Contact Information

Email: xai-CO@groups.imperial.ac.uk

Organisers

Guilherme Paulino-Passos and Fabrizio Russo