Poster Session I

Project Type

Poster

Project Funding and Affiliations

HONR 499 & the UM Minds Lab (Psychology)

Faculty Mentor’s Full Name

Rachel Severson

Faculty Mentor’s Department

Psychology

Abstract / Artist's Statement

As Artificial Intelligence (AI) becomes increasingly prevalent, both within and beyond academic spheres, understanding how people come to trust it becomes vital. With many large language models adopting an increasingly conversational style, we ask to what extent a speaker's confidence or uncertainty influences listener trust in two distinct domains: factual questions and moral dilemmas. At present, no research has been published on the subject. Participants were adults (N = 128; 18-46 years; 78.7% female) attending the University of Montana. Participants completed an online survey via SONA for class credit or extra credit and were randomly assigned to either the factual or the moral condition. Participants viewed videos in which two informants, one confident and one uncertain, made different claims in response to questions about a pair of animals. Each condition comprised eight trials, split evenly between human and AI informants. Questions in the factual condition used made-up facts (e.g., "Which of these eats blickets?") to avoid the influence of prior knowledge. Questions in the moral condition concerned ethical principles (e.g., "Which of these should get to take the medicine?"). Following the selective social learning paradigm, participants were asked in each trial which answer they endorsed and rated each informant's confidence, smartness, and likability. Adults were more likely to view the confident human or AI as more credible in both the factual (ps < .001) and moral (ps < .04) domains, although this effect was less pronounced in the moral domain.

Category

Social Sciences

Available for download on Sunday, February 20, 2028

Apr 17th, 10:45 AM - 11:45 AM

Adults more readily trust confident AI and humans for factual questions and moral dilemmas

UC South Ballroom