Featured Article
Article Title
Teaching social robots: the effect of robot mistakes on children’s learning-through-teaching
Authors
Celina K. Bowman-Smith; Department of Psychology, University of Waterloo, Waterloo, ON, Canada
Charlotte Aitken; Department of Psychology, University of Waterloo, Waterloo, ON, Canada
Thuvaraka Mahenthiran; Department of Psychology, University of Waterloo, Waterloo, ON, Canada
Edith Law; Cheriton School of Computer Science, University of Waterloo, Waterloo, ON, Canada
Elizabeth S. Nilsen; Department of Psychology, University of Waterloo, Waterloo, ON, Canada
Summary of Research
“Acquiring new knowledge is fundamental to children’s development. While traditional teaching approaches are effective for the majority of students, many students continue to struggle to meet learning outcomes. For instance, results from an international assessment of achievement indicated that 8% of Canadian students failed to meet the minimum proficiency for mathematics, which is consistent with international statistics. Concerned that the needs of many students are not being met, educators and researchers have explored how technology can be leveraged to create novel, engaging, and impactful learning opportunities. One potential technology is social robots, which have been used with the aim of enhancing children’s learning outcomes across educational contexts and academic subject areas” (p. 2).
“In the present work, we assessed whether the presence and type of mistake behavior demonstrated by a robot tutee impacted 8- to 11-year-old children’s teaching behaviors and their own learning of the content… Children aged 8 to 11 years old were recruited from a laboratory database of families interested in participating in research, as well as research flyers distributed throughout the community. The initial sample consisted of 124 children from a midsize Canadian city… The final sample consisted of 114 children (45 girls, Mage = 8.77 years, SDage = 0.88 years)” (pp. 3-4).
“Addressing our first research question, namely, whether the robot mistake behavior elicited different teaching behaviors from the children, we examined whether the average observations of basic and advanced teaching behaviors differed by robot condition. We found that children who taught a robot that made no mistakes produced fewer teaching behaviors generally than those children who taught a robot that made mistakes (typical or atypical)” (p. 9).
“Looking closer at correlation patterns within conditions, children who engaged in more explanatory or elaborate teaching strategies (i.e., advanced teaching) learned the content for themselves better. This finding is consistent with peer tutoring (Roscoe and Chi, 2008), wherein tutors who provide explanations (vs. just preparing to explain) show a deeper understanding of the material (Fiorella and Mayer, 2013). However, importantly, this was only found when children worked with either a robot that made no errors or with a robot that made typical mistakes. That is, children who taught a robot that did not follow a typical…” (p. 10).
“Given the importance of children’s ability to assess their own knowledge in relation to learning (Fisher, 1998; Kuhn, 2021), our third research question was whether the type of robot mistake behavior affects children’s perception of their own learning and teaching. We did not find that children’s rating of their own teaching or learning varied by condition. Thus, even though, as a group, children demonstrated more teaching behaviors when teaching robots that made mistakes and learned more with an atypical robot, their perceptions did not mirror these effects (in general, children rating themselves as fairly strong in terms of their teaching/learning across all three conditions)” (p. 10).
Translating Research into Practice
Learning-by-Teaching Benefits: Children learned more when they taught a robot, particularly one that made mistakes. This supports using teachable agents (robots, avatars, or characters) in therapy and educational interventions to enhance engagement and knowledge consolidation.
Mistakes Drive Deeper Learning: Robots that made atypical mistakes (getting previously taught info wrong) prompted the most learning. Introducing unexpected errors can increase curiosity and attention—tools that clinicians can use to foster cognitive flexibility and metacognition.
Mismatch in Self-Insight: Children didn’t always recognize when they were learning or teaching effectively. This finding points to the value of helping children reflect on and evaluate their own learning processes, especially those with challenges in self-awareness or confidence.
Support for Social-Cognitive Skills: Children responded differently based on the robot’s accuracy, suggesting they inferred its “knowledge state.” Clinicians could use robot or avatar-based tasks to promote theory of mind and communication adjustments in children with ASD or pragmatic language delays.
Reinforcing Teaching as Intervention: Children who used more advanced teaching strategies showed greater learning gains. Assigning children the role of “teacher” in therapy can empower them, increase motivation, and improve retention of coping skills or academic content.
Error Tolerance and Resilience: Teaching a robot that made mistakes encouraged sustained effort despite error. This can be helpful for building frustration tolerance, persistence, and flexible thinking—especially in kids prone to shutdown when things go wrong.
Other Interesting Tidbits for Researchers and Clinicians
“While this work presents novel findings, there are limitations to consider. First, the interaction between the child and Beta was somewhat limited as per our use of a predetermined script. Future work may use voice detection software and/or large language models to generate a greater variety of responses provided by the social robot, which may also allow for a more qualitative inquiry. Second, because this was a between-subject (vs. within-subject) design, we were not able to assess how children shift their teaching behaviors according to robot tutee behavior (including before/after mistakes), as well as limiting sensitivity in detecting differences in perceptions. Third, our age range was relatively small, and future work with a wider range would be useful to understand the developmental course as to when children show sensitivity to robot mistakes and adjust behaviors accordingly, with a recent meta-analysis on selective teaching suggesting there may be shifts around the age of 4. Finally, we did not conduct a parallel study with human tutees to determine whether the pattern of findings regarding children's learning-by-teaching is comparable with human vs. robot tutees, as well [as] how responses to robot tutee mistakes compare to responses to human tutee mistakes. This was not an initial objective for the study, but such a comparison would be useful to address questions about how educational technologies may offer benefit beyond traditional approaches. Indeed, recent work suggests that each context (i.e., robot tutee vs. child tutee) offers different learning advantages/limitations. As well, conceptually speaking, a comparison with a human tutee would allow for determining whether children hold similar sensitivity/expectations for the “learning” style of robots as they do for humans. It is possible that with human tutees, atypical mistakes create even more of a violation of expectations such that learning is further enhanced.
Alternatively, children may experience frustration when teaching a human tutee vs. a robot tutee because [of] the atypical learning pattern. Finally, it would also be useful for future work to examine whether children's individual characteristics play a role in how they respond to different robot behaviors in the context of teaching (e.g., theory of mind)” (p. 10).