Presentation Type

Poster

Faculty Mentor’s Full Name

Rachel Severson

Faculty Mentor’s Department

Psychology

Abstract

Previous research has found that infants view people, but not mechanical devices, as having intentions (Meltzoff, 1995). Yet new technologies, such as smart speakers and social robots, are capable of projecting personas and mimicking human interactions, which may lead children to view them as social agents rather than as mere technological devices. Indeed, recent research suggests children treat robots as social others (Meltzoff et al., 2010), but only when the robots interact in a socially contingent manner. The current study examines whether children will view a social robot as having intentions and, in turn, hold it morally culpable.

Three- and five-year-olds (N = 128 planned) were randomly assigned to one of four conditions that differed by agent: socially contingent robot, non-contingent robot, human, and control (no agent). The procedure included two ordered tasks. In the Dumbbell Task (Meltzoff, 1995), participants observed a video of the agent attempting, but failing, to pull apart a dumbbell, such that the agent's hand slipped off the end. The participant was then given the dumbbell. If children understood the agent as intentional (i.e., the agent was trying to pull the dumbbell apart), they should imitate the intended action (pulling the dumbbell apart). The Tower Task assessed participants' judgments of the agent's culpability. Participants viewed a video of the agent observing a person building a block tower, after which the agent knocked over the tower (without clear intent). Participants judged whether it was all right to knock over the tower, whether the agent should get in trouble, and whether the action was done intentionally.

The proposed study will contribute to an emerging body of research on whether children conceive of personified robots as pieces of technology, as social others, or as somewhere in between (e.g., the New Ontological Category hypothesis; Severson & Carlson, 2010), and on the moral consequences of doing so.

Category

Social Sciences

Available for download on Wednesday, April 13, 2022

Apr 17th, 3:00 PM to 4:00 PM

Do Young Children Treat a Robot as Having Intentions and Being Culpable For Its Actions?

UC South Ballroom
