Episode 3 "Understanding"
Show notes
How do you know when someone has understood something? And how can explainers adapt their approach to promote better understanding? In this episode, Prof. Britta Wrede discusses these questions with Prof. Hendrik Buschmeier, a computational linguist at Bielefeld University, and Prof. Heike Buhl, a psychologist at Paderborn University. (Episode in German; English transcript available.)
Show transcript
00:00:03: Audio Clip: Explaining explainability – in this podcast, we bring together different disciplines from the collaborative research center "Constructing Explainability." Two researchers discuss a concept related to explainability from their perspectives. Moderated by Professor Britta Wrede.
00:00:26: Prof. Dr. Britta Wrede: Yes, hello and welcome to the third episode of our podcast. After we discussed explaining and explainability in the first two episodes, today's topic is a rather large one — understanding. When explaining something to someone, it is important that the listener actively engages and provides feedback: whether they have understood, whether they can follow, or whether they have questions. That is also a fundamental principle of our Transregio.
00:00:54: Prof. Dr. Britta Wrede: So, one must somehow determine whether the other person has understood. Then there is the next step, which is not so simple either—the explainer must adapt. So, we have both understanding and adaptation, and we have two guests today who examine understanding from different perspectives. First, we have Professor Dr. Hendrik Buschmeier from computational linguistics, who works in Project A02 on understanding processes and how to recognize that someone has understood something—possibly even how a system could recognize it.
00:01:29: Audio Clip: Hendrik Buschmeier is a Junior Professor for Digital Linguistics at Bielefeld University. He earned his PhD in computer science from the Faculty of Technology at Bielefeld University and was part of the CITEC research group Social Cognitive Systems. His research focuses on empirical and computational modeling of dialogue phenomena, language and multimodality, as well as conversational interaction between humans and between humans and artificial agents.
00:01:59: Prof. Dr. Britta Wrede: Hello Hendrik, great to have you here.
00:02:01: Prof. Dr. Hendrik Buschmeier: Hello Britta.
00:02:02: Prof. Dr. Britta Wrede: On the other side, we have Professor Dr. Heike Buhl, who researches, from the perspective of educational psychology, how explainers can adapt to their partner's understanding. Psychology often carries the aura of being able to see into people's minds. In Project A01, she investigates how adaptive explaining works.
00:02:26: Audio Clip: Heike Buhl is a Professor of Educational Psychology and Developmental Psychology at Paderborn University. Her research focuses on communication and the underlying cognition, which she studies in various learning environments, family relationships, cooperation between home and school, and teacher education. As a key success factor in communication, she considers perspective-taking and the so-called partner model—the mental representation that conversation partners form of each other, for example, in explaining contexts.
00:03:05: Prof. Dr. Britta Wrede: Hello Heike, great to have you here.
00:03:06: Prof. Dr. Heike Buhl: Hello.
00:03:07: Prof. Dr. Britta Wrede: So, I'll just jump right in. We have a broad topic — understanding. What does understanding actually mean? Can it be broken down into smaller concepts to make it easier to grasp? Do you have definitions or possibly different dimensions?
00:03:25: Prof. Dr. Heike Buhl: Yes, we've thought a lot about how to conceptualize and present understanding. First, we define understanding as a product, not a process—meaning that something has been understood. Then, we considered different goals of understanding. In everyday explanations and ultimately in XAI, there are different understanding goals. Sometimes, I want to understand how something works, and sometimes I want to understand the facts.
00:03:56: Prof. Dr. Heike Buhl: Therefore, we distinguish between "knowing how"—the ability to do something, which we call "enabledness"—and "comprehension," or "knowing that," meaning I can fully grasp and conceptually describe something.
00:04:19: Prof. Dr. Britta Wrede: Is one more difficult than the other, or do they stand side by side?
00:04:24: Prof. Dr. Heike Buhl: No, they stand side by side in our conceptualization. To be truly enabled, one also needs comprehension. They are not completely independent; one requires the other to be able to act effectively and shape things.
00:04:51: Prof. Dr. Britta Wrede: Yes.
00:04:53: Prof. Dr. Hendrik Buschmeier: Yes, and that's a good view of understanding as a product. But in explaining, understanding as a process is also essential. When people converse, when someone explains something and the other wants to understand, it happens step by step. Every sentence must be understood. If we look closely, understanding develops over time. We may start with little knowledge and refine our understanding throughout the conversation.
00:05:22: Prof. Dr. Hendrik Buschmeier: This is what we study—how understanding evolves and how people recognize it.
00:05:39: Prof. Dr. Britta Wrede: That’s fascinating because we often think in binary terms: “Did you understand?”—Yes or no? But it seems like there are intermediate stages.
00:05:52: Prof. Dr. Hendrik Buschmeier: Yes, while listening, we already understand parts of what has been said. And explanations are often not just a single sentence but are developed in conversation, especially with complex topics, addressing various aspects. These aspects are interconnected. So, there are dependencies.
00:06:20: Prof. Dr. Hendrik Buschmeier: Something must be understood for the next step to be understood.
00:06:23: Prof. Dr. Britta Wrede: Yes, okay. You just mentioned understanding goals. Are they always the same, or where do they come from? These understanding goals—can it be that someone does not want to understand at all, or that they say, "That's enough for me"? What role do these goals actually play?
00:06:44: Prof. Dr. Heike Buhl: Very often, we only want to understand things superficially. I just want a rough understanding, or I just want to know how to turn on the light. It’s enough for me to press the switch; I don’t want to know how electricity works. That means some explanations don’t need to go deep.
00:07:01: Prof. Dr. Heike Buhl: I am perfectly satisfied when I have a very rough, superficial understanding. But sometimes, to be truly capable of action, I need a deep understanding of the subject. And that is something that must be negotiated in interaction. Someone wants to explain something to me, but I may not want to go too deep, or I may not want an extensive explanation. First, one must recognize what the other person needs.
00:07:27: Prof. Dr. Britta Wrede: So, it is actually quite difficult for the explainer, isn’t it? To figure that out. Can one adapt well to it? What adaptation processes exist? If I, as an explainer, want to adjust to you and your understanding, are there different processes you have identified?
00:07:50: Prof. Dr. Heike Buhl: First, I must develop an idea of what my counterpart already knows, their prior knowledge. This means being able to respond appropriately—that is what I call the partner model. I need an idea of what the other person knows beforehand, but of course, this is dynamic.
00:08:14: Prof. Dr. Heike Buhl: It changes throughout the explanation process. If I assume that someone knows a lot and start with high expectations of prior knowledge, but they don’t actually know that much, I must adapt. So, based on this changing partner model—which is cognitive, a mental representation of the conversation partner—I would then adapt interactively by using different explanation strategies.
00:08:41: Prof. Dr. Britta Wrede: Ah, okay. And what could such explanation strategies be?
00:08:48: Prof. Dr. Heike Buhl: That could include providing more information, repeating the explanation, or asking follow-up questions: "Did you understand this now, or do you need more?" Or ensuring understanding: "Would you like more details?"—so, engaging directly in the interaction.
00:09:05: Prof. Dr. Britta Wrede: Is repeating actually a great strategy? I mean, you’re basically just saying the same thing again.
00:09:12: Prof. Dr. Heike Buhl: It’s not necessarily the best strategy, but it is certainly one. Usually, one repeats with different words. So, one does not just recite it verbatim but rephrases it if the explanation level was too high or too low. One might paraphrase or pick up on what the other person has said.
00:09:29: Prof. Dr. Britta Wrede: Yes, that is important.
00:09:30: Prof. Dr. Heike Buhl: And then respond using their vocabulary.
00:09:32: Prof. Dr. Britta Wrede: People aren’t always that attentive, are they? I mean, that happens to me sometimes—then I realize I was just thinking about something else: “Could you repeat that?” So there are many spontaneous effects that, I think, naturally occur in a dialogue like this.
00:09:44: Prof. Dr. Hendrik Buschmeier: Exactly. And that’s an interesting point: How does someone explaining something actually realize that it wasn’t understood? We can, of course, look at how the explanation was given. If an explanation is displayed on a screen and can be read, it might be difficult to tell whether the reader understood it or not.
00:10:07: Prof. Dr. Hendrik Buschmeier: But when we’re in a conversation, we constantly give each other signals—Did I understand that? Did I not understand that? Am I looking puzzled at this moment? These are naturally useful indicators that the explainer can use to gauge whether the other person understood. I don’t know if you’d call it "measuring," but there are these comprehension signals—or "understanding displays," as we call them.
00:10:35: Prof. Dr. Hendrik Buschmeier: These provide evidence of whether the explanation worked well or not. And this interactive adjustment can happen in real time. The moment someone furrows their brow, I can already add more information to the sentence I was about to say and adapt immediately.
00:11:02: Prof. Dr. Britta Wrede: Yes, thank you for these initial thoughts and this first overview of your projects and the relevant concepts. I’d now like to dive a bit deeper into the methodology and hear from you—what exactly do you do in these projects? How do you approach them? And maybe even, what have you discovered so far? Is there already a lot to say about that?
00:11:31: Prof. Dr. Britta Wrede: Hendrik, you’ve already analyzed various explanation dialogues. What exactly did you do in the project, and how did you analyze it?
00:11:41: Prof. Dr. Hendrik Buschmeier: Our project is called "Monitoring Understanding," which means observing comprehension—but with the goal of doing something with that information. So the explainer observes the person receiving the explanation and tries to understand: Was it understood? Was it not understood? And so on. To investigate this, we first look at human-to-human interaction. We placed two people at a table—one person explains a board game to the other. This creates an explanation dialogue with multiple phases.
00:12:24: Prof. Dr. Hendrik Buschmeier: At first, the game isn’t present—it’s not on the table. The person receiving the explanation doesn’t see the game and doesn’t know anything about it. Initially, just the rules or general concepts are explained. Then, in the second phase, the game is placed on the table, and the explanation continues.
00:12:43: Prof. Dr. Hendrik Buschmeier: Suddenly, there’s more context. In a third phase, the participants start playing, and the explanations continue since there are still uncertainties. So we have multiple steps, and we record everything with video cameras from different angles. We also capture audio and then analyze the dialogues—how do they communicate? What signals do they produce?
00:13:06: Prof. Dr. Hendrik Buschmeier: Not just in speech but also multimodally—facial expressions, gestures, nodding, shaking their head. Does an eyebrow raise? Are they looking at each other? Where is their gaze directed? That’s what we examine.
00:13:19: Prof. Dr. Britta Wrede: And you just mentioned that understanding is more of a process. Are there even clear-cut signals? People often think: "Yes, I understood" or "No, I didn’t understand." Is it really that binary, or how does it work in such a process?
00:13:34: Prof. Dr. Hendrik Buschmeier: Well, people don’t always have to make everything explicit in communication. You can often tell when someone has trouble understanding. For example, if I say, "The game piece moves forward," but the other person is looking somewhere else entirely, that’s an indicator that they might not be clear on which game piece I’m referring to.
00:13:56: Prof. Dr. Hendrik Buschmeier: Or I say something, and I see a completely blank expression—that’s not a good sign. We’ve observed this in our data. When we analyzed how to recognize a lack of understanding in the other person using statistical and computer-assisted methods, we found that, for instance, if the person freezes up, that’s a strong sign of non-understanding.
00:14:27: Prof. Dr. Hendrik Buschmeier: And when understanding occurs, does that just mean a single nod? Not necessarily. The head is actually moving continuously in a dialogue. People nod subtly, maybe even in rhythm with the conversation. But if there’s suddenly less movement, or if all movement stops entirely—
00:14:43: Prof. Dr. Hendrik Buschmeier: That’s an interesting signal.
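To make the “freeze” cue concrete, here is a minimal sketch of how such low-movement stretches could be flagged automatically. It is an illustration rather than the project’s actual pipeline: it assumes per-frame head coordinates have already been extracted with an external tracker (e.g., OpenFace or MediaPipe), and the window length and threshold are arbitrary placeholders.

```python
import numpy as np

def detect_freezes(head_xy, fps=25, window_s=1.0, threshold=0.002):
    """Flag windows in which head movement nearly stops.

    head_xy: array of shape (n_frames, 2) with normalized head
    coordinates, assumed to come from an external tracker.
    Returns one boolean per window: True marks a candidate "freeze"
    that an explainer might read as non-understanding.
    """
    # Frame-to-frame displacement as a simple motion-energy proxy.
    motion = np.linalg.norm(np.diff(head_xy, axis=0), axis=1)

    # Average the motion signal within fixed-length windows.
    win = int(fps * window_s)
    n_windows = len(motion) // win
    windowed = motion[: n_windows * win].reshape(n_windows, win).mean(axis=1)

    return windowed < threshold

# Synthetic example: continuous micro-movement, then a freeze.
rng = np.random.default_rng(0)
moving = rng.normal(0.5, 0.01, size=(100, 2))  # subtly jittering head
frozen = np.full((50, 2), 0.5)                 # motionless head
print(detect_freezes(np.vstack([moving, frozen])))  # [False ... True]
```

The point of the sketch is Buschmeier’s observation below: in natural dialogue the head is never fully still, so a near-zero motion window is itself an informative signal.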
00:14:47: Prof. Dr. Britta Wrede: Does this also have to do with the fact that people might not like admitting they didn’t understand something? Or maybe they don’t even realize yet that they didn’t understand it?
00:14:53: Prof. Dr. Hendrik Buschmeier: That’s not something we specifically focus on, but of course: would you tell your boss that you didn’t understand something they just explained? Social aspects play a role. What is my role in this dialogue? Am I supposed to understand this, and so on? Another point is, I might think I haven’t understood it yet, but the conversation is continuing, so I’ll probably understand it soon.
00:15:18: Prof. Dr. Hendrik Buschmeier: So, I might hold back my signal of understanding. Or I just nod for now, everything seems fine, and later I have to admit that I didn’t quite get it, and then I might make that explicit.
00:15:30: Prof. Dr. Britta Wrede: It’s quite a complicated process when you think about it. How is it in your project? Heike, in your research, you’re looking at this process of adaptation, which makes things even more complex because first, you have to determine whether the person understood it, and then you have to decide how to respond. How do you even approach that?
00:15:49: Prof. Dr. Heike Buhl: Right now, we’re also focusing on human-human interaction, and we also use game explanations. That’s not a coincidence—we wanted to work with similar and comparable material. We also analyze the explanations on a linguistic level, and from a psychology perspective, as you mentioned earlier, psychologists try to look inside the human mind.
00:16:18: Prof. Dr. Heike Buhl: Of course, we can’t literally look inside, so we ask. Before the explanation, we ask the person explaining—the explainer—what kind of model they have of their partner. These are stereotype-based models: I assume they have an average knowledge of games in general. I assume they have an average level of interest. So, we use stereotypes that form the basis of this partner model.
00:16:43: Prof. Dr. Heike Buhl: Then, after the explanation, we examine this partner model again. We ask about it once more, and what we find is that it generally improves. That means people tend to perceive their conversation partners as more interested afterward than they initially expected, as friendlier, more cooperative, and in a better mood than they had assumed beforehand.
00:17:11: Prof. Dr. Heike Buhl: We try to gain insight into the interaction by using a little trick: video recall. That means we show participants a video recording of their interaction afterward. In advance, we identify specific scenes that seem particularly interesting—moments where we suspect a change occurred in the partner model, where the explainee might have signaled confusion or said, “Yes, I got it.”
00:17:43: Prof. Dr. Heike Buhl: These are situations where we think a shift happened. Then, we revisit these moments and ask again about the partner model. By analyzing a few key sequences, we can observe how the partner model evolves. One thing I find particularly fascinating is that, as expected, people adjust their model of understanding: “I assume they now understand more than before.”
00:18:13: Prof. Dr. Heike Buhl: But the perception of expertise in games varies. For some, it increases, while for others, it decreases. You might start with the assumption that someone knows a lot about games, but then realize, “Oh, they actually don’t know as much as I thought.” So, these changes happen in both directions.
00:18:30: Prof. Dr. Britta Wrede: That’s interesting. I see, Hendrik—
00:18:32: Prof. Dr. Hendrik Buschmeier: Yes, that’s actually funny because there are interesting parallels between our projects, yet we approach them differently, and of course, for good reasons. We also use video recall in our research. But in our case, it’s not the researchers who select the situations to be reviewed. Instead, right after the participants have had their conversation, we show each of them the video and ask them to pause at the moments when they feel their understanding changed—where they didn’t understand something, or where they finally got it.
00:19:11: Prof. Dr. Hendrik Buschmeier: For the explainer, the task is: when do you think the other person understood? What was the key moment? This helps us identify the moments in a dialogue where something meaningful happens regarding comprehension.
00:19:32: Prof. Dr. Britta Wrede: And have you been able to find out how well the explainer actually recognizes when the other person has understood?
00:19:39: Prof. Dr. Hendrik Buschmeier: It works, but not systematically. And of course, we don’t know if participants always mark every relevant moment. But there are instances where both parties mark the same moment—where both say, “Here, they finally understood,” or, “Here, I finally understood.” And sometimes, they don’t match up at all.
00:19:58: Prof. Dr. Britta Wrede: And in your respective projects—since you each focus on opposite perspectives, one from the explainer’s side and the other from the explainee’s—how do you approach that? Hendrik, in your case, for example, annotation isn’t purely manual, even though that’s also crucial. You probably use other methods as well, right?
00:20:23: Prof. Dr. Hendrik Buschmeier: We have 88 dialogues, each of which lasts a long time, so we can’t manually annotate everything. Some things do need to be annotated by hand—marking specific moments in the video—but other aspects are automated. For instance, transcription is automated. We also analyze gestures: how much hand movement is happening? We use computer vision techniques to detect such things automatically.
00:20:52: Prof. Dr. Hendrik Buschmeier: So…
00:20:56: Prof. Dr. Hendrik Buschmeier: Algorithmic methods are used to identify certain patterns in the videos.
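As a flavor of what such automated annotation could look like, the sketch below quantifies per-frame hand movement with an off-the-shelf tracker. MediaPipe is used here purely as an example of the “computer vision techniques” mentioned; the project’s actual toolchain is not specified in the conversation, and the video file name is a placeholder.

```python
import cv2
import mediapipe as mp
import numpy as np

# Estimate how much hand movement occurs per frame of a dialogue video,
# as a coarse proxy for gesturing activity.
hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=2)
cap = cv2.VideoCapture("dialogue.mp4")  # placeholder file name

prev, activity = {}, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    movement = 0.0
    if result.multi_hand_landmarks:
        for i, hand in enumerate(result.multi_hand_landmarks):
            # Track the wrist landmark (index 0) as a stand-in for the
            # whole hand; matching hands by list position is crude but
            # sufficient for a rough activity measure.
            wrist = np.array([hand.landmark[0].x, hand.landmark[0].y])
            if i in prev:
                movement += float(np.linalg.norm(wrist - prev[i]))
            prev[i] = wrist
    activity.append(movement)

cap.release()
print(f"Mean per-frame hand movement: {np.mean(activity):.4f}")
```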
00:21:00: Prof. Dr. Britta Wrede: Yes, exactly. Heike, you already mentioned that in your project you first ask people questions. So, it is probably more difficult for you to analyze this automatically, right?
00:21:16: Prof. Dr. Heike Buhl: Yes, we also look at much less data—we are more economical in Project A01 than in Project A02. We focus only on linguistic expressions, not gestures or facial expressions. And that is indeed annotated manually.
00:21:38: Prof. Dr. Heike Buhl: But at a level—"superficial" would be the wrong word—we look specifically at what drives the conversation forward. So, what are the verbal expressions that advance the discussion? And we relate these to the partner models we previously gathered. This means we can examine both lines separately: linguistic behavior, verbal communication, and changes—especially in the interplay between the explainer and the explainee.
00:22:14: Prof. Dr. Heike Buhl: Yes, so what influences one, what influences the other? And then we relate this again to cognitive adaptability and the partner model. So, we integrate these various pieces of information.
00:22:34: Prof. Dr. Britta Wrede: Have you noticed certain strategies, or are there very different strategies? There are people who explain well and others who struggle. Can this be said in general terms, or would you say some people can adapt well to their partner while others cannot?
00:22:51: Prof. Dr. Britta Wrede: Or do you have an impression of that?
00:22:53: Prof. Dr. Heike Buhl: Determining whether someone explains well is a difficult question. It depends on what constitutes a good explanation. We would need another step—we would have to assess how much the explainee actually understood in the end. This means conducting an additional objective test or evaluation at the end.
00:23:17: Prof. Dr. Heike Buhl: We have not reached that point yet, but it is something we aim to explore. What we do know is that explainees tend to be more or less satisfied. We have also conducted role-playing experiments to explore this further. For example, we did something quite mean—what you mentioned earlier about frozen interaction. We instructed explainees, specifically student assistants, not to respond at all. This resulted in very long explanations that eventually hit a wall.
00:23:57: Prof. Dr. Heike Buhl: These explanations became much more rigid overall. This indicates that without the participation of the explainee, the explanation does not work effectively.
00:24:07: Prof. Dr. Britta Wrede: Yes, that makes sense, right? You just mentioned that if someone does not respond, it signals a lack of understanding, correct?
00:24:12: Prof. Dr. Hendrik Buschmeier: Exactly, exactly. And that is a great finding. The feedback from the explainee is crucial, and this is not just specific to explanations—it applies to all conversations. Talking to someone without receiving any feedback is difficult. We, as lecturers, noticed this clearly during the pandemic when students often had their cameras turned off on Zoom.
00:24:39: Prof. Dr. Hendrik Buschmeier: We sat in front of black screens, talking, but receiving no responses. Did anyone understand? Can I continue? This was unclear. There are also interesting findings from dialogue research—for instance, storytelling does not work without feedback. Stories become extremely long, lack a clear climax, and lose structure because the storyteller wonders, "Why isn’t this being understood?"
00:25:10: Prof. Dr. Hendrik Buschmeier: And withholding feedback is truly a difficult task. That makes it an important subject to examine.
00:25:18: Prof. Dr. Britta Wrede: As an explainer—this question is for both of you—can I do something to encourage better reactions from people? I experience this often in lectures; even when students are physically present, sometimes I just see blank faces. Is there something that can be done?
00:25:32: Prof. Dr. Hendrik Buschmeier: A lecture is, of course, a special situation—it is essentially a monologue, and there is an audience. The audience does provide some feedback—maybe someone kindly nods or smiles, while others yawn, and so on. So, there is some response. However, this is entirely different from a situation where we are sitting together, sharing information, explaining, and getting direct feedback.
00:25:55: Prof. Dr. Hendrik Buschmeier: Yes, I would also say that conversation research does not fully understand how dialogue processes work. There are models for this—how people think during a conversation, how they determine whether the other person has understood, and how they remember that. But how does this scale to an audience of 100 people?
00:26:17: Prof. Dr. Hendrik Buschmeier: There would have to be some kind of "average listener model," right? From my own experience, I would say that in lectures, you tend to focus on the one friendly person in the front row who always nods. If they understand, great!
00:26:31: Prof. Dr. Britta Wrede: Yes, okay. But what would you say, Heike? What strategies do explainers use? Do they have specific techniques?
00:26:41: Prof. Dr. Heike Buhl: Many of our participants tend to ensure comprehension explicitly. Some ask bluntly, "Did you understand?" But many also phrase their questions differently—offering additional information and expecting the explainee to continue, like describing an image. Some even quiz their listeners, similar to a classroom setting: "How many pieces do we have? What happens next?" They incorporate a verification phase.
00:27:00: Prof. Dr. Heike Buhl: So, there is a kind of confirmation process involved.
00:27:06: Prof. Dr. Hendrik Buschmeier: Those are probably the teaching students in your study! We had similar participants—some wanted to explain everything first and only ask questions at the end. This might not be the typical situation, but overall, we see a mechanism in "Monitoring Understanding." It is not just about observing what the other person does but actively prompting responses.
00:27:34: Prof. Dr. Hendrik Buschmeier: For example, elicitation processes—questions at the end of an utterance, like "Okay?" This invites a nod or a response. Sometimes, even just raising the pitch at the end of a sentence is a cue for the listener to acknowledge understanding.
00:27:52: Prof. Dr. Hendrik Buschmeier: Eye movement is another factor. People look around during a conversation, but near the end of a thought, they often glance at their conversation partner. Interestingly, the listener is usually engaged at the same moment, completing the gaze. This moment often aligns with a semantically significant point in the conversation.
00:28:15: Prof. Dr. Hendrik Buschmeier: For example, at a meaningful boundary in a sentence. This is an ideal signal—both parties recognize that a conversational segment has ended, they make eye contact, and through this mutual gaze, they confirm that everything is understood.
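The “completing the gaze” pattern can be made concrete with a toy sketch: given two per-frame boolean streams saying whether each partner is looking at the other (assumed to come from an upstream gaze estimator), find the stretches of mutual gaze. The data and function name are illustrative, not from the project.

```python
import numpy as np

def mutual_gaze_spans(speaker_gaze, listener_gaze):
    """Return (start, end) frame indices of mutual-gaze stretches.

    Both inputs are boolean arrays: True where that person is looking
    at their partner, assumed to come from an upstream gaze estimator.
    """
    mutual = np.asarray(speaker_gaze) & np.asarray(listener_gaze)
    # Rising and falling edges of the mutual-gaze signal.
    edges = np.diff(mutual.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if mutual[0]:
        starts = np.insert(starts, 0, 0)
    if mutual[-1]:
        ends = np.append(ends, len(mutual))
    return list(zip(starts, ends))

# Toy example: the speaker glances over near the end of an utterance,
# and the listener "completes the gaze" shortly afterwards.
speaker = np.array([0, 0, 0, 1, 1, 1, 1, 0], dtype=bool)
listener = np.array([1, 1, 0, 0, 1, 1, 1, 1], dtype=bool)
print(mutual_gaze_spans(speaker, listener))  # [(4, 7)]
```

Aligning such spans with utterance boundaries from the transcript would then show whether mutual gaze clusters at semantically significant points, as described above.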
00:28:38: Prof. Dr. Britta Wrede: Yes, thanks! That might be a good conclusion for this second part and a great transition to the third part. These are very subtle signals, and sometimes they work incredibly well. But the big question is—could a computer, an XAI, also interpret these cues?
00:28:59: Prof. Dr. Britta Wrede: Exactly. That’s what we will explore in the next section.
00:29:09: Prof. Dr. Britta Wrede: Hendrik, so far, you have only looked at human-to-human interactions. Do you think a machine could effectively recognize whether someone has understood something, or at least when they are in the process of understanding?
00:29:26: Prof. Dr. Hendrik Buschmeier: Yes, classification and recognition mechanisms are already quite advanced. Still, it remains a very challenging problem. Facial expressions are subtle, with small variations, and, most importantly, their meaning depends on context. The significance of a facial expression can change based on what the explainer just said.
00:29:53: Prof. Dr. Hendrik Buschmeier: We cannot simply look at a face and determine its meaning in an explanation process. That makes it quite challenging. However, in our project, we have already found measurable influences—for example, signs indicating when understanding has or has not occurred.
00:30:16: Prof. Dr. Britta Wrede: Okay, but this sounds incredibly complex. You just explained the issue from a computational perspective—pattern recognition may be possible despite subtle signals, and we will likely master that. But the real challenge is that the same signals might mean different things depending on the context—not just broad contexts but even within a single dialogue.
00:30:44: Prof. Dr. Hendrik Buschmeier: Exactly. That’s why it must always be considered in the dialogue context. We also need a structured model of what is happening in the dialogue. That’s why we analyze discourse structures—to operationalize these contextual factors.
00:31:00: Prof. Dr. Britta Wrede: What do you mean by discourse structure?
00:31:02: Prof. Dr. Hendrik Buschmeier: For example, is a question being asked? And does a blank facial expression occur in response to a question that was just posed? Or was something just repeated, and a blank facial expression—whatever that may be—appears? If these are things that should already be known, then in that context, the expression might not mean anything.
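In deliberately oversimplified form, the sketch below illustrates what such conditioning on discourse structure means: the same listener signal maps to different interpretations depending on the preceding dialogue act. The categories and rules are invented for illustration and are not taken from the project’s models.

```python
# Toy illustration: the same listener signal is read differently
# depending on the preceding dialogue act. All categories are invented.
def interpret(signal: str, preceding_act: str) -> str:
    if signal == "blank_face":
        if preceding_act == "question":
            return "likely non-understanding (no answer is forming)"
        if preceding_act == "repetition":
            return "possibly no new information (content already known)"
        return "ambiguous without more context"
    if signal == "nod":
        if preceding_act == "check_question":  # e.g. "Okay?"
            return "explicit display of understanding"
        return "weak, continuous feedback"
    return "unmodeled signal"

print(interpret("blank_face", "question"))
print(interpret("blank_face", "repetition"))
print(interpret("nod", "check_question"))
```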
00:31:26: Prof. Dr. Hendrik Buschmeier: And yes, if we want to train statistical models that also understand these relationships, we need a lot of data. Fortunately, we have gathered a substantial amount in our project, which provides a good foundation. Another interesting question is whether our XAI is always equipped with a camera and what form it actually takes. If gaze signals and natural conversational behavior play a role, we are already visualizing the XAI as a persona.
00:32:03: Prof. Dr. Hendrik Buschmeier: It could be a robot or an agent displayed on a screen. But if we are talking about something like a smart speaker, then other aspects become important. In that case, there is no one perceiving facial expressions. Perhaps humans also produce different signals in such interactions.
00:32:24: Prof. Dr. Hendrik Buschmeier: Or if we interact with a chatbot that explains something to us, then all of these signals might not be as relevant.
00:32:33: Prof. Dr. Britta Wrede: Do you really think they are completely irrelevant?
00:32:36: Prof. Dr. Hendrik Buschmeier: It’s possible, but they might also mean something different in that context. We would need to analyze human-chatbot interactions to determine which aspects are important. Some behaviors may transfer, such as the time it takes for a person to respond.
00:32:57: Prof. Dr. Britta Wrede: Yes, but couldn’t there also be positive effects? You mentioned earlier that people may not always want to admit that they don’t understand something. With a speaker, for example, it might be less intimidating to admit confusion and ask for further explanation.
00:33:16: Prof. Dr. Hendrik Buschmeier: Yes, exactly. You could ask for clarification multiple times without feeling bad about it. Normally, you might hesitate to ask again and again, worrying that you appear uninformed. But with a machine, that social pressure disappears.
00:33:32: Prof. Dr. Britta Wrede: Now, looking at it from another angle—Heike, could machines or computers actually be better explainers than humans? Right now, they probably aren’t, but could that be a possibility?
00:33:48: Prof. Dr. Heike Buhl: We already see some indications of this from other fields, such as tutoring. The ability to repeatedly ask for explanations without feeling embarrassed is a key advantage of intelligent tutoring systems over human tutors. Human tutors are not always great at maintaining an accurate partner model because our cognitive capacity for processing information is limited.
00:34:14: Prof. Dr. Heike Buhl: Explaining itself is a cognitively demanding task. If I also have to keep track of my partner’s understanding simultaneously, that adds another layer of complexity. Experiments have shown that when human explainers are explicitly provided with partner model information, their explanations improve significantly.
00:34:42: Prof. Dr. Heike Buhl: And studies suggest that intelligent tutoring systems can sometimes be as effective as human explainers, thanks to their greater processing capacity and ability to store and recall relevant details. While these systems are still quite limited compared to where we aim to go, I do believe that machines could eventually become very competent explainers.
00:35:07: Prof. Dr. Britta Wrede: That’s fascinating! At the same time, people often fear that machines might surpass us—even being able to “see through us.” There’s a worry about how much they should know.
00:35:21: Prof. Dr. Hendrik Buschmeier: That’s a valid concern.
00:35:24: Prof. Dr. Britta Wrede: Yes, because such knowledge could be misused. That’s an important ethical issue, of course. While your projects don’t specifically address these concerns, they are part of the broader discussion in our research. But what I take away as a positive note is that you both suggest that XAI could, in theory, become better at explaining by improving its ability to recognize non-understanding.
00:35:49: Prof. Dr. Britta Wrede: So, we could imagine that an XAI might one day explain things even more effectively than humans, simply because it can better detect when someone doesn’t understand. Would you agree?
00:36:03: Prof. Dr. Hendrik Buschmeier: Perhaps. Dialogue theories assume that people are highly attentive and adjust their explanations accordingly. But I’m somewhat skeptical—when someone is explaining something, they are often preoccupied with structuring their own thoughts. The explainee’s understanding may not always be their primary focus.
00:36:32: Prof. Dr. Hendrik Buschmeier: So, while we look to human explanations as a model for improving machine explanations, that doesn’t mean humans are always the best explainers. There is great variability. Some humans may always be better than machines, but not every human will necessarily outperform an AI explainer.
00:37:02: Prof. Dr. Britta Wrede: Maybe we’ll have explanation competitions in the future!
00:37:11: Prof. Dr. Heike Buhl: Yes, but on the other hand—
00:37:15: Prof. Dr. Heike Buhl: Machines still have a lot to learn.
00:37:16: Prof. Dr. Heike Buhl: We can already see that when dynamic partner models are included, the explanation process improves significantly. There are still key components that need to be integrated. But I’m not sure how far we should go or whether I’d be comfortable with a machine having a perfect model of me. That would be unsettling.
00:37:39: Prof. Dr. Heike Buhl: Yet, at a basic level, machines could certainly be helpful in explanations.
00:37:50: Prof. Dr. Britta Wrede: Yes, thank you both! There is still so much to discuss, but we’ve reached the end of our episode. Today, we talked about understanding and adapting explanations. Our guests were Heike Buhl from Paderborn University and Hendrik Buschmeier from Bielefeld University. Thank you for listening! If you enjoyed this episode, subscribe to our podcast and stay tuned for more.
00:38:21: Audio Clip: Explaining explainability. Thank you for listening to this episode. If you have any questions about this topic, feel free to ask them in the comments. We look forward to welcoming you to our next episode.