Explainable AI Seminars @ Imperial

CLArg Group, Department of Computing, Imperial College London


Explainable AI (XAI) has witnessed unprecedented growth in both academia and industry in recent years (alongside AI itself), given its crucial role in supporting human-AI partnerships in which potentially opaque, data-driven AI methods can be intelligibly and safely deployed by humans in a variety of settings, such as finance, healthcare and law. XAI sits at the intersection of AI, human-computer interaction, the social sciences (in particular psychology) and applications.

XAI also features increasingly in policies on the ethics, trustworthiness and safety of AI. This seminar series covers all aspects of XAI, from methods to applications.

Coming up

Benjamin Grosof: Towards Stronger Hybrid AI: Combining Extended Logic Programs with Natural Language and Machine Learning

Tuesday 4th October, 12:00 BST

See the Next Seminar section below for the abstract, speaker bio and details of how to join the event online.


Don't miss the next one! Subscribe to our newsletter for announcements.

Next Seminar

Benjamin Grosof: Towards Stronger Hybrid AI: Combining Extended Logic Programs with Natural Language and Machine Learning

Tuesday 4th October, 12:00 BST. Hybrid: Huxley Building, Room 144, Imperial College London, and online via Microsoft Teams.


Abstract

A key challenge for AI overall is to improve neuro-symbolic AI and its super-class, hybrid AI. We discuss how and why to combine more tightly the core areas of knowledge representation & reasoning (KRR), including logic, natural language (NL), machine learning (ML), and several flavors of uncertainty. We formulate several goals for such hybrid AI, and focus on strengthening three aspects. One is expressiveness, including for complex knowledge in NL written by humans and in results produced by ML. A second aspect is scalability, including socially across a diverse set of people/organizations. A third aspect is explainability, including for non-programmers. We present our approach, based on extended logic programs (ELP), which is largely implemented as the ErgoAI system. The approach addresses the hybrid-AI goals to a substantial degree, but much more remains to be done. We discuss some related work, and sketch several promising future directions for the field. (NB: joint work with others, including at Coherent Knowledge.)

Short Bio

Benjamin Grosof, PhD, is Co-Founder and Chief Scientist at Coherent Knowledge, an AI startup that provides highly explainable decision support via query answering. He is an industry leader in the theory and practice of how to represent, reason with, and acquire knowledge – including how to combine logical knowledge representation & reasoning (KRR) with machine learning (ML) and natural language (NL). He has pioneered technology and industry standards for expressively flexible semantic rules combined with ontologies and knowledge graphs, their acquisition from natural language, and a wide variety of applications, including in finance, legal & policy, e-commerce, health care & life science, defense & security, and helpdesk. In particular, he has led the invention of declarative logic programs that extend databases scalably with powerful implications and meta-logic features such as probabilities, higher-order syntax, bounded rationality (restraint), and exceptions/argumentation (defeasibility). Previously, he was a technical/research executive in AI at the Allen Institute for AI’s predecessor, at Accenture, and at Kyndi, a venture-backed AI startup. Earlier, he was an MIT Sloan professor in IT and an IBM Research scientist. His background includes a part-time expert consulting practice, a Stanford PhD in computer science (specialty AI), a Harvard BA in applied mathematics, 60+ refereed publications, 10,000+ citations, 5 patents, 2 W3C standards, and 5 major industry software products.

Past Seminars

Speaker | Title | Date | Attendance
Lun AI | Effects of machine-learned logic theories on human comprehension in machine-human teaching | July 13, 2022 | Over 10
Dylan Slack | Exposing Shortcomings and Improving the Reliability of ML Models | June 22, 2022 | Over 15
Joao Leite | Logic-based Explanations for Neural Networks | June 9, 2022 | Over 15
Oana Camburu | Neural Networks with Natural Language Explanations | May 26, 2022 | Over 15
Nino Scherrer | Learning Neural Causal Models with Active Interventions | April 27, 2022 | Over 10
Hamed Ayoobi | Explain What You See: Argumentation-Based Learning for 3D Object Recognition | April 6, 2022 | Over 10
Mattia Setzu | Breaking the Local/Global explanation dichotomy: GLocalX and the Local to Global explanation paradigm | March 17, 2022 | 10
Eoin Kenny | Explaining Black Box Classifiers via Post-Hoc Explanation-by-Example: Factual, Semi-Factual, and Counterfactual Explanations | March 2, 2022 | Over 20
Riccardo Crupi | Counterfactual Explanations as Interventions in Latent Space | February 14, 2022 | Over 20
Martin Jullum | Prediction explanation with Shapley values | February 3, 2022 | Over 15
Ioannis Votsis | The Study of Reasoning in Philosophy, Psychology and AI: In Search of Synergies | January 19, 2022 | Over 15
Michael Yeomans | Conversational Receptiveness: Improving Engagement with Opposing Views | December 13, 2021 | Over 10
Piyawat Lertvittayakumjorn | Explanation-Based Human Debugging of NLP Models | December 1, 2021 | Over 15
Leila Amgoud | Explaining Black-Box Classifiers: Properties and Functions | November 24, 2021 | Over 40
Guilherme Paulino-Passos | Monotonicity, Noise-Tolerance, and Explanations in Case-Based Reasoning with Abstract Argumentation | November 10, 2021 | 10
- | explAIn Workshop: Exploring the links between Explainable AI, Causality and Persuasion | July 8, 2021 | Over 30
Fabrizio Silvestri | Counterfactual Explanations of (some) Machine Learning Models | June 9, 2021 | Over 15
Marek Sergot | Actual cause and chancy causation in 'stit' logics | June 2, 2021 | Over 15
Menna El-Assady | Visual Analytics Perspectives on Interactive and Explainable Machine Learning | May 5, 2021 | Over 25
Tony Hunter | Overview of Computational Persuasion and relationships with Explainable AI | April 14, 2021 | 20
Timotheus Kampik | Principle-based and Explainable Reasoning: From Humans to Machines | March 17, 2021 | Over 10
Umang Bhatt | Practical Approaches to Explainable Machine Learning | February 24, 2021 | 30
Riccardo Guidotti | Exploiting Auto-Encoders for Explaining Black Box Classifiers | January 27, 2021 | 40
Kacper Sokol | Modular Machine Learning Interpretability: A Case Study of Surrogate Explainers | December 9, 2020 | 15
Hana Chockler | Why do things go wrong (or right)? | November 25, 2020 | 40
Emanuele Albini | Relation-based counterfactual explanations for Bayesian classifiers | November 4, 2020 | Over 40
Jannes Klass | Explainable AI in the Wild: Lessons learned from applying explainable AI to real world use cases | March 4, 2020 | Over 15
Brent Mittelstadt | Governance of AI through explanations: From approximations to counterfactuals | February 12, 2020 | 20
Claudia Schulz | Explaining (how to improve) Diagnostic Reasoning | January 15, 2020 | Over 30
Erisa Karafili | Helping Forensics Analysts to Understand and Attribute Cyber-Attacks | December 4, 2019 | 15
Pasquale Minervini | Explainable, Data-Efficient, Verifiable Representation Learning in Knowledge Graphs | November 27, 2019 | 15
Adam White | Measurable Counterfactual Explanations for Any Classifier | November 13, 2019 | 15
Loizos Michael | Acceptable Explanations through Machine Coaching | November 7, 2019 | Over 15
Dave Braines | Conversational Explanations – Explainable AI through human-machine conversation | October 14, 2019 | 25
Zohreh Shams | Explanation in Ontology Reasoning | October 1, 2019 | Over 25
Filip Radlinski | User-Centric Recommendation | July 4, 2019 | 25
Sanjay Modgil | Dialectical Formalizations of Non-monotonic Reasoning: Rationality under Resource Bounds | June 20, 2019 | 20
Daniele Magazzeni | Model-Based Reasoning for Explainable AI Planning as a Service | May 30, 2019 | Over 25
Christos Bechlivanidis | Concrete and Abstract Explanations | May 20, 2019 | Over 20
Simone Stumpf | Making Human-Centred Machine Intelligence Intelligible | April 15, 2019 | Over 20
Ken Satoh | ContractFrames: Bridging the gap between Natural Language and Logics in Contract Law | March 25, 2019 | Over 20
Number of talks: 40

Contact Information

Email: xai-CO@groups.imperial.ac.uk

Organisers

Nico Potyka and Fabrizio Russo