Explainable AI Seminars @ Imperial

CLArg Group, Centre for eXplainable Artificial Intelligence, Imperial College London


XAI has witnessed unprecedented growth in both academia and industry in recent years (alongside AI itself), given its crucial role in supporting human-AI partnerships. XAI enables opaque, data-driven AI methods to be deployed safely in a variety of settings, such as finance, healthcare and law. It sits at the intersection of AI, human-computer interaction, the social sciences (in particular psychology) and applications.
XAI also features increasingly in AI policies on ethics, trustworthiness and safety. This seminar series covers all aspects of XAI, ranging from methods to applications.

Next Seminar: 2nd July, 15:00 BST

Noam Levi - Universality and Reductionism in Deep Learning


Get invites in your inbox - Sign up here

Next Seminar

Noam Levi - Universality and Reductionism in Deep Learning

2nd July, 15:00 BST, Imperial College London South Kensington Campus, Skempton 163 (directions) and on Microsoft Teams (join online).


Abstract

How is it that we can describe the complex universe using simple, fundamental rules? The equivalence between vastly different, complex physical systems, when observed from afar, enables us to make accurate predictions without analyzing their microscopic details. Conversely, by reducing such systems to their minimal constituents, we can uncover phenomena that would otherwise seem inscrutable. In this talk, I will explore how these concepts of universality and reductionism extend beyond the natural universe to the synthetic world of neural networks, focusing on a surprising phenomenon that can arise when training deep neural networks: Grokking, or delayed generalization. First, I will demonstrate that Grokking can occur even in seemingly trivial models without hidden layers or nonlinearities when they are near a “generalization threshold.” Next, I will show that these simple systems can be analyzed analytically, revealing that the Grokking phenomenon stems from the system approaching a critical point. Finally, I will discuss how Grokking might be understood as a manifestation of a broader characteristic of neural networks trained near criticality and outline prospects for future research.

Bio

Noam is an AI4Science Fellow at EPFL, affiliated with the Laboratory for Information and Inference Systems, led by Prof. Volkan Cevher, and the Laboratory of Astrophysics, led by Prof. Jean-Paul Kneib. His research sits at the intersection of theoretical physics and machine learning: applying tools from high-energy and statistical physics to study machine learning algorithms, and applying ML to advance the physical sciences. Noam received a double B.Sc. in Physics and Electrical Engineering in 2015, an M.Sc. in 2017, and a Ph.D. in Theoretical Particle Physics and Cosmology in 2024, all from Tel Aviv University in Israel, under the supervision of Prof. Tomer Volansky.

Planned Seminars

Date | Speaker
24th July, 2025, 16:00 BST | Thomas Icard

Past Seminars (since 2019)

Speaker | Title | Date | Attendance
Paul Kobialka | Counterfactual Strategies for Markov Decision Processes | June 18, 2025 | Over 20
Elke Kirschbaum | Beyond Single-Feature Importance with ICECREAM | June 11, 2025 | Over 15
Milad Kazemi and Jess Lally | Robust Counterfactual Inference in Markov Decision Processes | May 28, 2025 | Over 25
Jannis Kurtz | Robust Counterfactual Explanations and Mathematical Optimization | May 14, 2025 | Over 20
Jesús Renero | Causal Discovery based on ML and Shapley values | April 30, 2025 | Over 25
Lei You | Harnessing Transport Theory for PostHoc Explanations of Machine Learning | April 7, 2025 | Over 25
André Freitas | Controlling Natural Language Inference over LLMs | March 26, 2025 | Over 35
Elena Musi | LLMs Explanations via argumentative prompting | March 12, 2025 | Over 45
Laura State | REASONX: a declarative explanation tool | February 27, 2025 | Over 25
Marco Valentino | Reasoning with Natural Language Explanations: From Epistemology to Neuro-Symbolic AI | February 13, 2025 | Over 30
Nina Pardal | The Distributional Uncertainty of the SHAP score in Explainable Machine Learning | January 29, 2025 | Over 30
André Artelt | "Who Dunit?" -- Tracing Explanations back to Training Samples | December 16, 2024 | Over 25
Leila Methnani | XAI is a double-edged sword | November 21, 2024 | Over 20
Nils Breuer | CAGE: Causality-Aware Shapley Value for Global Explanations | November 21, 2024 | Over 20
Stylianos Loukas Vasileiou | Explainable Decision-Making: From Formal Logic to AI Systems with Explainable Behavior | October 17, 2024 | Over 35
Nevin L. Zhang | Two-Stage Holistic and Contrastive Explanation of Image Classification | July 2, 2024 | Over 20
Jacopo Bono | DiConStruct: Causal Concept-based Explanations through Black-Box Distillation | June 20, 2024 | Over 15
Jacopo Teneggi and Beepul Bharti | SHAP-XRT: The Shapley Value Meets Conditional Independence Testing | May 30, 2024 | Over 25
Patrick Altmeyer | ECCCos from the Black Box: Faithful Model Explanations through Energy-Constrained Conformal Counterfactuals | May 13, 2024 | Over 25
Eoin Delaney | Counterfactual explanations for misclassified images: How human and machine explanations differ | May 2, 2024 | Over 25
David Watson | LENS - Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice | April 16, 2024 | Over 25
Tom Nuno Wolf | Towards interpretable neural networks for differential dementia diagnosis | April 4, 2024 | Over 30
Wonjoon Chang | Understanding Learned Representations in Deep Neural Networks without Supervision | March 21, 2024 | Over 20
Daphne Odekerken | Algorithms for transparent human-in-the-loop decision support at the Dutch police | March 5, 2024 | Over 35
Mihaela van der Schaar and Krzysztof Kacprzyk | The Road to Transparent AI: Latest Breakthroughs in Time-Series Interpretability | February 19, 2024 | Over 130
Dhagash Mehta and Joshua Rosaler | Towards Enhanced Local Explainability of Random Forests: a Proximity-Based Approach | February 7, 2024 | Over 20
Tim Miller | Explainable AI is dead! Long live Explainable AI! Or: why your AI tool probably doesn’t work and why it is so *#$# hard to get it to do so | January 23, 2024 | Over 45
Xuehao Zhai | Post-hoc explanations for AI in transport planning | December 14, 2023 | Over 15
Marcelo Finger | Adventures in Natural Language Processing in Brazil during the Pandemic: The SPIRA Project | November 30, 2023 | Over 25
Pranava Madhyastha | Explanations and robustness in Multimodal Language Models | November 16, 2023 | Over 15
Nirmalie Wiratunga | Intelligent Reuse of Explanation Experiences: The Role of Case-Based Reasoning in Promoting Explainable AI for Users by Users | November 2, 2023 | Over 20
Francesco Leofante | Robust Explainable AI: the Case of Counterfactual Explanations | October 27, 2023 | Over 40
Son Tran | The Model Reconciliation Problem and Explainable AI | June 21, 2023 | Over 25
Luciano Serafini | Learning and inference with hybrid models (three examples) | June 6, 2023 | Over 25
Matthew Wicker | On the Synergy between Robustness and Explainability | May 23, 2023 | Over 30
Julius von Kügelgen | Backtracking Counterfactuals | May 3, 2023 | Over 20
Gerard Canal | Explanations for planning robots: verbal causal narrations of plans and proactivity in explanations | March 24, 2023 | Over 10
Ana Ozaki | Querying Neural Networks: The BERT Case | March 1, 2023 | Over 20
Pietro Totis | Reasoning on Arguments and Beliefs with Probabilistic Logic Programs | December 8, 2022 | Over 25
Eleonora Giunchiglia | Deep Learning with Hard Logical Constraints | November 23, 2022 | Over 25
Emiliano Lorini | Non-Classical Logics for Explanations in AI Systems | November 9, 2022 | Over 25
Cor Steging | Responsible AI: Towards a Hybrid Method for Evaluating Data-Driven Decision-making | October 26, 2022 | Over 20
Gopal Gupta | Automating Commonsense Reasoning | October 14, 2022 | Over 20
Benjamin Grosof | Towards Stronger Hybrid AI: Combining Extended Logic Programs with Natural Language and Machine Learning | October 5, 2022 | Over 30
Lun AI | Effects of machine-learned logic theories on human comprehension in machine-human teaching | July 13, 2022 | Over 10
Dylan Slack | Exposing Shortcomings and Improving the Reliability of ML Models | June 22, 2022 | Over 15
Joao Leite | Logic-based Explanations for Neural Networks | June 9, 2022 | Over 15
Oana Camburu | Neural Networks with Natural Language Explanations | May 26, 2022 | Over 15
Nino Scherrer | Learning Neural Causal Models with Active Interventions | April 27, 2022 | Over 10
Hamed Ayoobi | Explain What You See: Argumentation-Based Learning for 3D Object Recognition | April 6, 2022 | Over 10
Mattia Setzu | Breaking the Local/Global explanation dichotomy: GLocalX and the Local to Global explanation paradigm | March 17, 2022 | Over 10
Eoin Kenny | Explaining Black Box Classifiers via Post-Hoc Explanation-by-Example: Factual, Semi-Factual, and Counterfactual Explanations | March 2, 2022 | Over 20
Riccardo Crupi | Counterfactual Explanations as Interventions in Latent Space | February 14, 2022 | Over 20
Martin Jullum | Prediction explanation with Shapley values | February 3, 2022 | Over 15
Ioannis Votsis | The Study of Reasoning in Philosophy, Psychology and AI: In Search of Synergies | January 19, 2022 | Over 15
Michael Yeomans | Conversational Receptiveness: Improving Engagement with Opposing Views | December 13, 2021 | Over 10
Piyawat Lertvittayakumjorn | Explanation-Based Human Debugging of NLP Models | December 1, 2021 | Over 15
Leila Amgoud | Explaining Black-Box Classifiers: Properties and Functions | November 24, 2021 | Over 40
Guilherme Paulino-Passos | Monotonicity, Noise-Tolerance, and Explanations in Case-Based Reasoning with Abstract Argumentation | November 10, 2021 | Over 10
- | explAIn Workshop: Exploring the links between Explainable AI, Causality and Persuasion | July 8, 2021 | Over 30
Fabrizio Silvestri | Counterfactual Explanations of (some) Machine Learning Models | June 9, 2021 | Over 15
Marek Sergot | Actual cause and chancy causation in 'stit' logics | June 2, 2021 | Over 15
Menna El-Assady | Visual Analytics Perspectives on Interactive and Explainable Machine Learning | May 5, 2021 | Over 25
Tony Hunter | Overview of Computational Persuasion and relationships with Explainable AI | April 14, 2021 | Over 20
Timotheus Kampik | Principle-based and Explainable Reasoning: From Humans to Machines | March 17, 2021 | Over 10
Umang Bhatt | Practical Approaches to Explainable Machine Learning | February 24, 2021 | Over 30
Riccardo Guidotti | Exploiting Auto-Encoders for Explaining Black Box Classifiers | January 27, 2021 | Over 40
Kacper Sokol | Modular Machine Learning Interpretability: A Case Study of Surrogate Explainers | December 9, 2020 | Over 15
Hana Chockler | Why do things go wrong (or right)? | November 25, 2020 | Over 40
Emanuele Albini | Relation-based counterfactual explanations for Bayesian classifiers | November 4, 2020 | Over 40
Jannes Klass | Explainable AI in the Wild: Lessons learned from applying explainable AI to real world use cases | March 4, 2020 | Over 15
Brent Mittelstadt | Governance of AI through explanations: From approximations to counterfactuals | February 12, 2020 | Over 20
Claudia Schulz | Explaining (how to improve) Diagnostic Reasoning | January 15, 2020 | Over 30
Erisa Karafili | Helping Forensics Analysts to Understand and Attribute Cyber-Attacks | December 4, 2019 | Over 15
Pasquale Minervini | Explainable, Data-Efficient, Verifiable Representation Learning in Knowledge Graphs | November 27, 2019 | Over 15
Adam White | Measurable Counterfactual Explanations for Any Classifier | November 13, 2019 | Over 15
Loizos Michael | Acceptable Explanations through Machine Coaching | November 7, 2019 | Over 15
Dave Braines | Conversational Explanations – Explainable AI through human-machine conversation | October 14, 2019 | Over 25
Zohreh Shams | Explanation in Ontology Reasoning | October 1, 2019 | Over 25
Filip Radlinski | User-Centric Recommendation | July 4, 2019 | Over 25
Sanjay Modgil | Dialectical Formalizations of Non-monotonic Reasoning: Rationality under Resource Bounds | June 20, 2019 | Over 20
Daniele Magazzeni | Model-Based Reasoning for Explainable AI Planning as a Service | May 30, 2019 | Over 25
Christos Bechlivanidis | Concrete and Abstract Explanations | May 20, 2019 | Over 20
Simone Stumpf | Making Human-Centred Machine Intelligence Intelligible | April 15, 2019 | Over 20
Ken Satoh | ContractFrames: Bridging the gap between Natural Language and Logics in Contract Law | March 25, 2019 | Over 20

Contact Information

Email: xai-CO@groups.imperial.ac.uk

Organisers

Gabriel Freedman and Fabrizio Russo