Explainable AI Seminars @ Imperial

CLArg Group, Department of Computing, Imperial College London


Explainable AI (XAI) has witnessed unprecedented growth in both academia and industry in recent years (alongside AI itself), given its crucial role in supporting human-AI partnerships in which (potentially opaque) data-driven AI methods can be deployed intelligibly and safely by humans in a variety of settings, such as finance, healthcare and law. XAI sits at the intersection of AI, human-computer interaction, the social sciences (in particular psychology) and applications.

XAI is also increasingly part of policies on the ethics, trustworthiness and safety of AI. This seminar series covers all aspects of XAI, ranging from methods to applications.

Coming up

Oana Camburu: Neural Networks with Natural Language Explanations

Thursday 26th May, 15:00 BST

See the Next Seminar section below to learn more about the seminar and to find the link to join the event online.


Don't miss the next one! Subscribe to our Newsletter here

Next Seminar

Oana Camburu: Neural Networks with Natural Language Explanations

Thursday 26th May, 15:00 BST, Microsoft Teams: Link


Abstract

In order for machine learning to garner widespread public adoption, models must be able to provide human-understandable explanations for their decisions. In this talk, we will focus on the emerging direction of building neural networks that learn from natural language explanations at training time and generate such explanations at test time. We will start with e-SNLI, an extension of the Stanford Natural Language Inference (SNLI) dataset with an additional layer of human-written natural language explanations for the inference relations. We will see different types of architectures that incorporate these explanations into their training process and generate them at test time. We will then see a similar approach for vision-language models, where we introduce e-SNLI-VE, a large dataset of visual-textual entailment with natural language explanations. We will also see e-ViL, a benchmark for natural language explanations in vision-language tasks, and e-UG, the state-of-the-art (SOTA) model for natural language explanation generation on such tasks. These large datasets of explanations open up a range of research directions for using natural language explanations both for improving models and for establishing trust in them. However, models trained on such datasets may nonetheless generate inconsistent explanations; an adversarial framework for sanity-checking models against generating such inconsistencies will be presented. Finally, we will see RExC, an architecture that grounds both predictions and explanations in background knowledge. RExC improves over previous methods by (i) providing two types of explanations, where existing models usually provide only one, and (ii) surpassing the previous SOTA by a large margin in the quality of its explanations.
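To make the setting concrete, the sketch below shows what an e-SNLI-style training example looks like (a premise/hypothesis pair with its inference label and a human-written free-text explanation) and the kind of predict-and-explain interface such models expose. It is a minimal illustration only: the field names, the example text and the stub predict_and_explain function are hypothetical, not the official e-SNLI schema or any of the speaker's models.

```python
from dataclasses import dataclass

@dataclass
class ExplainedNLIExample:
    """An e-SNLI-style record: an SNLI pair plus a free-text explanation."""
    premise: str
    hypothesis: str
    label: str        # "entailment", "neutral" or "contradiction"
    explanation: str  # human-written justification of the label

# Hypothetical example in the spirit of e-SNLI (not taken from the dataset).
example = ExplainedNLIExample(
    premise="A man in a blue shirt is playing a guitar on stage.",
    hypothesis="A musician is performing.",
    label="entailment",
    explanation="Playing a guitar on stage is a way of performing music.",
)

def predict_and_explain(premise: str, hypothesis: str) -> tuple[str, str]:
    """Stub for a predict-and-explain model: at training time it would be
    supervised with both the gold label and the gold explanation; at test
    time it returns a label together with a generated explanation."""
    return "entailment", "Someone playing a guitar on stage is performing."

label, explanation = predict_and_explain(example.premise, example.hypothesis)
print(f"label: {label}\nexplanation: {explanation}")
```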


Short Bio

Oana is a Research Fellow in the Department of Computer Science at University College London, where she holds a Leverhulme Early Career Fellowship. Prior to this, she was a postdoc in the Department of Computer Science at the University of Oxford, where she also obtained her PhD with the thesis "Explaining Deep Neural Networks". Her main research interests lie in explainability for deep learning, with applications in both natural language processing and computer vision, for which she has received several fellowships and grants.

Past Seminars

Speaker | Title | Date | Attendance
Nino Scherrer | Learning Neural Causal Models with Active Interventions | April 27, 2022 | Over 10
Hamed Ayoobi | Explain What You See: Argumentation-Based Learning for 3D Object Recognition | April 6, 2022 | Over 10
Mattia Setzu | Breaking the Local/Global explanation dichotomy: GLocalX and the Local to Global explanation paradigm | March 17, 2022 | 10
Eoin Kenny | Explaining Black Box Classifiers via Post-Hoc Explanation-by-Example: Factual, Semi-Factual, and Counterfactual Explanations | March 2, 2022 | Over 20
Riccardo Crupi | Counterfactual Explanations as Interventions in Latent Space | February 14, 2022 | Over 20
Martin Jullum | Prediction explanation with Shapley values | February 3, 2022 | Over 15
Ioannis Votsis | The Study of Reasoning in Philosophy, Psychology and AI: In Search of Synergies | January 19, 2022 | Over 15
Michael Yeomans | Conversational Receptiveness: Improving Engagement with Opposing Views | December 13, 2021 | Over 10
Piyawat Lertvittayakumjorn | Explanation-Based Human Debugging of NLP Models | December 1, 2021 | Over 15
Leila Amgoud | Explaining Black-Box Classifiers: Properties and Functions | November 24, 2021 | Over 40
Guilherme Paulino-Passos | Monotonicity, Noise-Tolerance, and Explanations in Case-Based Reasoning with Abstract Argumentation | November 10, 2021 | 10
- | explAIn Workshop: Exploring the links between Explainable AI, Causality and Persuasion | July 8, 2021 | Over 30
Fabrizio Silvestri | Counterfactual Explanations of (some) Machine Learning Models | June 9, 2021 | Over 15
Marek Sergot | Actual cause and chancy causation in 'stit' logics | June 2, 2021 | Over 15
Menna El-Assady | Visual Analytics Perspectives on Interactive and Explainable Machine Learning | May 5, 2021 | Over 25
Tony Hunter | Overview of Computational Persuasion and relationships with Explainable AI | April 14, 2021 | 20
Timotheus Kampik | Principle-based and Explainable Reasoning: From Humans to Machines | March 17, 2021 | Over 10
Umang Bhatt | Practical Approaches to Explainable Machine Learning | February 24, 2021 | 30
Riccardo Guidotti | Exploiting Auto-Encoders for Explaining Black Box Classifiers | January 27, 2021 | 40
Kacper Sokol | Modular Machine Learning Interpretability: A Case Study of Surrogate Explainers | December 9, 2020 | 15
Hana Chockler | Why do things go wrong (or right)? | November 25, 2020 | 40
Emanuele Albini | Relation-based counterfactual explanations for Bayesian classifiers | November 4, 2020 | Over 40
Jannes Klass | Explainable AI in the Wild: Lessons learned from applying explainable AI to real world use cases | March 4, 2020 | Over 15
Brent Mittelstadt | Governance of AI through explanations: From approximations to counterfactuals | February 12, 2020 | 20
Claudia Schulz | Explaining (how to improve) Diagnostic Reasoning | January 15, 2020 | Over 30
Erisa Karafili | Helping Forensics Analysts to Understand and Attribute Cyber-Attacks | December 4, 2019 | 15
Pasquale Minervini | Explainable, Data-Efficient, Verifiable Representation Learning in Knowledge Graphs | November 27, 2019 | 15
Adam White | Measurable Counterfactual Explanations for Any Classifier | November 13, 2019 | 15
Loizos Michael | Acceptable Explanations through Machine Coaching | November 7, 2019 | Over 15
Dave Braines | Conversational Explanations – Explainable AI through human-machine conversation | October 14, 2019 | 25
Zohreh Shams | Explanation in Ontology Reasoning | October 1, 2019 | Over 25
Filip Radlinski | User-Centric Recommendation | July 4, 2019 | 25
Sanjay Modgil | Dialectical Formalizations of Non-monotonic Reasoning: Rationality under Resource Bounds | June 20, 2019 | 20
Daniele Magazzeni | Model-Based Reasoning for Explainable AI Planning as a Service | May 30, 2019 | Over 25
Christos Bechlivanidis | Concrete and Abstract Explanations | May 20, 2019 | Over 20
Simone Stumpf | Making Human-Centred Machine Intelligence Intelligible | April 15, 2019 | Over 20
Ken Satoh | ContractFrames: Bridging the gap between Natural Language and Logics in Contract Law | March 25, 2019 | Over 20
Number of talks: 36

Contact Information

Email: xai-CO@groups.imperial.ac.uk

Organisers

David Tuckey       Antonio Rago       Fabrizio Russo