Explaining Explainability – the podcast on Explainable Artificial Intelligence.

How does the act of explanation, done by humans, work – and how can it be applied to AI systems? These questions are being investigated by the Collaborative Research Center “Constructing Explainability” at the universities of Bielefeld and Paderborn.

In this podcast we bring together the different disciplines: two researchers from different fields discuss a concept around Explainable Artificial Intelligence (XAI) from their respective points of view. The discussion is moderated by Britta Wrede, Professor of Medical Assistance Systems at Bielefeld University and an expert on Explainable Artificial Intelligence.

Latest episodes

Episode 6 "Looking Back on the First Funding Phase"

51m 11s

In this episode, TRR 318 speakers Professor Katharina Rohlfing from Paderborn University and Professor Philipp Cimiano from Bielefeld University reflect on the highlights, challenges, and surprises of the first funding phase. Together with moderator Professor Britta Wrede, they discuss how their perspective on co-construction has evolved over the past four years and share their outlook on the next steps and future directions for TRR 318.

Episode 5 "Scaffolding"

33m 24s

At TRR 318, scaffolding is recognized as a crucial aspect of explaining. It involves supporting learners, whether they are children acquiring their first words or robots interacting with caregivers. In the fifth episode of the podcast "Explaining Explainability," Prof. Britta Wrede engages with experts Dr. Angela Grimminger from Paderborn University and Prof. Dr.-Ing. Anna-Lisa Vollmer from Bielefeld University. They explore how scaffolding functions in various contexts, such as explanatory situations, and its role in their TRR 318 projects.

Episode 4 "Co-Construction"

47m 13s

As TRR 318 understands it, explanations are not delivered in a one-way process: they are co-constructed by the explainer and the explainee. And it seems that Large Language Models (LLMs) are pretty good at doing just that. But do they really co-construct? And what is co-construction? Prof. Britta Wrede discusses these questions with two experts on LLMs, Prof. Axel Ngonga Ngomo from Paderborn University and Prof. Henning Wachsmuth from Leibniz Universität Hannover.

Episode 3 "Understanding"

38m 37s

How do you know when someone has understood something? And how can explainers adapt their approach to promote better understanding? In this episode, Prof. Britta Wrede discusses these questions with Prof. Hendrik Buschmeier, a computational linguist at Bielefeld University, and Prof. Heike Buhl, a psychologist at Paderborn University. (Episode in German; English transcript available.)