
Volume: 23
Issue: 1-2
Abstract
In this study, I explore mathematics teaching-learning dialogues between preservice teachers and a chatbot (ChatGPT) with the aim of increasing our understanding of human mathematics teaching-learning dialogues. The dialogues contain wrong solutions (as determined by the preservice teachers), and the teachers try to get the chatbot to reconsider its answers without giving away the answer. The chatbot’s solution attempts are interpreted through and against the concepts of babbling (imperfect efforts to express thought) and gargling (imitation of the surface form of expressions), and the dialogues are analyzed by considering whether and how the preservice teacher and the chatbot reach joint attention. The conversations illustrate that chatbots based on Large Language Models (LLMs) may behave analogously to gargling students, and that preservice teachers are tempted to direct the chatbot’s attention to specific important aspects of tasks to get it to reconsider its answers, engaging in a language game of funnelling. Funnelling may work, in the sense that the chatbot arrives at an acceptable solution; still, it sometimes proves extremely difficult to achieve what looks like human joint attention with the chatbot, hindering progress on a mathematical problem.
First Page: 167
Last Page: 184
Recommended Citation
Gíslason, Ingólfur (2026) "Learning about human mathematical dialogue from dialogue with chatbots: Babbling, gargling and funnelling," The Mathematics Enthusiast: Vol. 23: No. 1, Article 9. DOI: https://doi.org/10.54870/1551-3440.1685
Available at: https://scholarworks.umt.edu/tme/vol23/iss1/9
Digital Object Identifier (DOI): 10.54870/1551-3440.1685
Publisher: University of Montana, Maureen and Mike Mansfield Library