Poster Session II

Project Type

Poster

Faculty Mentor’s Full Name

Rachel Severson

Faculty Mentor’s Department

Department of Psychology

Abstract / Artist's Statement

Children are increasingly interacting with Artificial Intelligence (AI) to seek information. Yet little is known about how children discern whether to trust AI responses. This study investigates two related questions. First, children prefer to learn from confident over hesitant people, but do they similarly prefer to learn from confident AI? Second, individual differences in children’s attributions of human-like mental states to non-human entities (anthropomorphism) predict their social conceptions of AI-embedded technologies. Accordingly, are individual differences in anthropomorphism related to children’s learning from AI?

Child participants (N=64; ages 5-8) viewed videos of AI and human informants answering questions. Across eight trials (four with human informants, four with AI informants), one informant answered confidently and the other answered hesitantly. Children were asked which answer they thought was correct, and rated each informant on their level of confidence, smartness, and likability. Participants also completed the Individual Differences in Anthropomorphism Questionnaire-Child Form (IDAQ-CF).

To address the two research questions, I will analyze (1) children’s learning preferences for the confident or hesitant informants and (2) whether participants’ anthropomorphism scores predict who (or what) they trusted when learning new information. I expect that children with higher anthropomorphism scores will trust AI at least as much as they trust humans when learning new information.

Understanding the link between children’s trust and AI contributes to the real-world challenge of designing safe AI tools for children. This study offers a new analytic perspective on human vulnerability in trusting AI and supports the development of more child-aware AI tools.

Category

Social Sciences

Apr 17th, 2:30 PM Apr 17th, 3:30 PM

Mindful Machines: Do Children Trust AI More When They Think It Has a Mind?

UC South Ballroom