Explainable AI Seminars @ Imperial

CLArg Group, Department of Computing, Imperial College London


Explainable AI (XAI) has witnessed unprecedented growth in both academia and industry in recent years (alongside AI itself), given its crucial role in supporting human-AI partnerships in which (potentially opaque) data-driven AI methods can be intelligibly and safely deployed by humans in a variety of settings, such as finance, healthcare and law. XAI sits at the intersection of AI, human-computer interaction, the social sciences (in particular psychology) and applications.

Overall, XAI increasingly features in policies on the ethics, trustworthiness and safety of AI. This seminar series covers all aspects of XAI, from methods to applications.

Coming up

Thursday 30th November, 16:00 GMT - Prof. Marcelo Finger: Adventures in Natural Language Processing in Brazil during the Pandemic: The SPIRA Project

Click here to learn more about the seminars and here to join the event online.


Don't miss the next one: subscribe to our newsletter here!

Next Seminars

Prof. Marcelo Finger: Adventures in Natural Language Processing in Brazil during the Pandemic: The SPIRA Project

Thursday 30th November, 16:00 GMT on Microsoft Teams (join online).


Abstract

At the outbreak of the COVID-19 pandemic, a group of researchers put together the SPIRA Project to collect, analyze and automatically detect respiratory insufficiency from voice recordings of Brazilian Portuguese speakers, collected via cellphones. It was, by construction, a multidisciplinary initiative, bringing together computer scientists, medical doctors, speech therapists and linguists, and was conceived to study both big-data analysis of audio via several machine learning methods and small-data analysis of the acoustic properties of patient and control voice signals. Rather than detecting COVID-19 infection itself, the project aimed to detect the main condition associated with the disease that forces people to be hospitalized, taking the point of view of patient triage. This view was motivated by the fact that SARS-CoV-2 infection was commonly associated with silent hypoxia, a condition in which oxygen saturation in the blood is low but the patient does not feel breathless. The project obtained 96% accuracy in respiratory insufficiency detection using a Transformer neural network, alongside a detailed audio analysis of fundamental-frequency-related parameters in Brazilian Portuguese speakers with COVID-19 and a characterization of pauses as a speech biomarker for COVID-19. Audio-processing techniques developed by the project won first prizes in competitions on COVID-19 detection in cough recordings and on speech emotion recognition.
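As a purely illustrative aside, the kind of pipeline the abstract describes (a cellphone voice recording classified by a Transformer network) can be sketched as below. This is a minimal sketch and not the SPIRA implementation: the model architecture, feature extraction, hyperparameters and class labels are all assumptions made for the example.

```python
# A minimal, hypothetical sketch (not the SPIRA codebase): classify a speech
# clip as "respiratory insufficiency" vs "control" with a small Transformer
# encoder over log-mel spectrogram frames. All sizes and names are assumed.
import torch
import torch.nn as nn
import torchaudio


class AudioTransformerClassifier(nn.Module):
    def __init__(self, n_mels: int = 80, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # Waveform (batch, samples) -> mel features (batch, n_mels, frames).
        self.melspec = torchaudio.transforms.MelSpectrogram(
            sample_rate=16_000, n_mels=n_mels)
        # Self-attention over spectrogram frames; positional encodings are
        # omitted for brevity, though a real model would need them.
        layer = nn.TransformerEncoderLayer(
            d_model=n_mels, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(n_mels, 2)  # two classes: insufficiency/control

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        feats = self.melspec(waveform).clamp(min=1e-10).log()
        feats = feats.transpose(1, 2)   # (batch, frames, n_mels)
        encoded = self.encoder(feats)   # contextualize each frame
        pooled = encoded.mean(dim=1)    # average-pool over time
        return self.head(pooled)        # class logits


if __name__ == "__main__":
    model = AudioTransformerClassifier()
    clip = torch.randn(1, 16_000 * 3)      # 3 s of random "audio" at 16 kHz
    print(model(clip).softmax(dim=-1))     # untrained class probabilities
```

Pooling the encoder output over time yields a single clip-level prediction, which mirrors the triage setting described in the abstract: one recording in, one risk label out.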

Bio

Marcelo Finger has a BSc in Electronic Engineering from the Universidade de São Paulo (EP-USP, 1988), an MSc in Foundations of Advanced Information Technology from the Imperial College of Science, Technology and Medicine (1990) and a PhD in Computing from the Imperial College of Science and Technology, University of London (1994). He has held visiting positions in the Computer Science departments of Université Paul Sabatier, Toulouse (2011) and Cornell University (2012-2013). He is currently Professor of Computer Science at the Department of Computer Science, Institute of Mathematics and Statistics, University of São Paulo, and Principal Investigator at the USP-FAPESP-IBM Center for Artificial Intelligence (C4AI), where he coordinates the NLP2 group on natural language processing in Portuguese. He serves on the editorial boards of the South American Journal of Logic and the São Paulo Journal of Mathematical Sciences, and has been a guest editor at Theoretical Computer Science and Annals of Mathematics and Artificial Intelligence. He works as an expert in Computer Science, focusing on logic and deductive-probabilistic reasoning, and researches logic, artificial intelligence, digital humanities and computational linguistics.

Past Seminars (since 2019)

Speaker | Title | Date | Attendance
Pranava Madhyastha | Explanations and robustness in Multimodal Language Models | November 16, 2023 | Over 15
Nirmalie Wiratunga | Intelligent Reuse of Explanation Experiences: The Role of Case-Based Reasoning in Promoting Explainable AI for Users by Users | November 2, 2023 | Over 20
Francesco Leofante | Robust Explainable AI: the Case of Counterfactual Explanations | October 27, 2023 | Over 40
Son Tran | The Model Reconciliation Problem and Explainable AI | June 21, 2023 | Over 25
Luciano Serafini | Learning and inference with hybrid models (three examples) | June 6, 2023 | Over 25
Matthew Wicker | On the Synergy between Robustness and Explainability | May 23, 2023 | Over 30
Julius von Kügelgen | Backtracking Counterfactuals | May 3, 2023 | Over 20
Gerard Canal | Explanations for planning robots: verbal causal narrations of plans and proactivity in explanations | March 24, 2023 | Over 10
Ana Ozaki | Querying Neural Networks: The BERT Case | March 1, 2023 | Over 20
Pietro Totis | Reasoning on Arguments and Beliefs with Probabilistic Logic Programs | December 8, 2022 | Over 25
Eleonora Giunchiglia | Deep Learning with Hard Logical Constraints | November 23, 2022 | Over 25
Emiliano Lorini | Non-Classical Logics for Explanations in AI Systems | November 9, 2022 | Over 25
Cor Steging | Responsible AI: Towards a Hybrid Method for Evaluating Data-Driven Decision-making | October 26, 2022 | Over 20
Gopal Gupta | Automating Commonsense Reasoning | October 14, 2022 | Over 20
Benjamin Grosof | Towards Stronger Hybrid AI: Combining Extended Logic Programs with Natural Language and Machine Learning | October 5, 2022 | Over 30
Lun AI | Effects of machine-learned logic theories on human comprehension in machine-human teaching | July 13, 2022 | Over 10
Dylan Slack | Exposing Shortcomings and Improving the Reliability of ML Models | June 22, 2022 | Over 15
Joao Leite | Logic-based Explanations for Neural Networks | June 9, 2022 | Over 15
Oana Camburu | Neural Networks with Natural Language Explanations | May 26, 2022 | Over 15
Nino Scherrer | Learning Neural Causal Models with Active Interventions | April 27, 2022 | Over 10
Hamed Ayoobi | Explain What You See: Argumentation-Based Learning for 3D Object Recognition | April 6, 2022 | Over 10
Mattia Setzu | Breaking the Local/Global explanation dichotomy: GLocalX and the Local to Global explanation paradigm | March 17, 2022 | Over 10
Eoin Kenny | Explaining Black Box Classifiers via Post-Hoc Explanation-by-Example: Factual, Semi-Factual, and Counterfactual Explanations | March 2, 2022 | Over 20
Riccardo Crupi | Counterfactual Explanations as Interventions in Latent Space | February 14, 2022 | Over 20
Martin Jullum | Prediction explanation with Shapley values | February 3, 2022 | Over 15
Ioannis Votsis | The Study of Reasoning in Philosophy, Psychology and AI: In Search of Synergies | January 19, 2022 | Over 15
Michael Yeomans | Conversational Receptiveness: Improving Engagement with Opposing Views | December 13, 2021 | Over 10
Piyawat Lertvittayakumjorn | Explanation-Based Human Debugging of NLP Models | December 1, 2021 | Over 15
Leila Amgoud | Explaining Black-Box Classifiers: Properties and Functions | November 24, 2021 | Over 40
Guilherme Paulino-Passos | Monotonicity, Noise-Tolerance, and Explanations in Case-Based Reasoning with Abstract Argumentation | November 10, 2021 | Over 10
- | explAIn Workshop: Exploring the links between Explainable AI, Causality and Persuasion | July 8, 2021 | Over 30
Fabrizio Silvestri | Counterfactual Explanations of (some) Machine Learning Models | June 9, 2021 | Over 15
Marek Sergot | Actual cause and chancy causation in 'stit' logics | June 2, 2021 | Over 15
Menna El-Assady | Visual Analytics Perspectives on Interactive and Explainable Machine Learning | May 5, 2021 | Over 25
Tony Hunter | Overview of Computational Persuasion and relationships with Explainable AI | April 14, 2021 | Over 20
Timotheus Kampik | Principle-based and Explainable Reasoning: From Humans to Machines | March 17, 2021 | Over 10
Umang Bhatt | Practical Approaches to Explainable Machine Learning | February 24, 2021 | Over 30
Riccardo Guidotti | Exploiting Auto-Encoders for Explaining Black Box Classifiers | January 27, 2021 | Over 40
Kacper Sokol | Modular Machine Learning Interpretability: A Case Study of Surrogate Explainers | December 9, 2020 | Over 15
Hana Chockler | Why do things go wrong (or right)? | November 25, 2020 | Over 40
Emanuele Albini | Relation-based counterfactual explanations for Bayesian classifiers | November 4, 2020 | Over 40
Jannes Klass | Explainable AI in the Wild: Lessons learned from applying explainable AI to real world use cases | March 4, 2020 | Over 15
Brent Mittelstadt | Governance of AI through explanations: From approximations to counterfactuals | February 12, 2020 | Over 20
Claudia Schulz | Explaining (how to improve) Diagnostic Reasoning | January 15, 2020 | Over 30
Erisa Karafili | Helping Forensics Analysts to Understand and Attribute Cyber-Attacks | December 4, 2019 | Over 15
Pasquale Minervini | Explainable, Data-Efficient, Verifiable Representation Learning in Knowledge Graphs | November 27, 2019 | Over 15
Adam White | Measurable Counterfactual Explanations for Any Classifier | November 13, 2019 | Over 15
Loizos Michael | Acceptable Explanations through Machine Coaching | November 7, 2019 | Over 15
Dave Braines | Conversational Explanations – Explainable AI through human-machine conversation | October 14, 2019 | Over 25
Zohreh Shams | Explanation in Ontology Reasoning | October 1, 2019 | Over 25
Filip Radlinski | User-Centric Recommendation | July 4, 2019 | Over 25
Sanjay Modgil | Dialectical Formalizations of Non-monotonic Reasoning: Rationality under Resource Bounds | June 20, 2019 | Over 20
Daniele Magazzeni | Model-Based Reasoning for Explainable AI Planning as a Service | May 30, 2019 | Over 25
Christos Bechlivanidis | Concrete and Abstract Explanations | May 20, 2019 | Over 20
Simone Stumpf | Making Human-Centred Machine Intelligence Intelligible | April 15, 2019 | Over 20
Ken Satoh | ContractFrames: Bridging the gap between Natural Language and Logics in Contract Law | March 25, 2019 | Over 20

Contact Information

Email: xai-CO@groups.imperial.ac.uk

Organisers

Guilherme Paulino-Passos and Fabrizio Russo