Are robots morally culpable? The role of intentionality and anthropomorphism

Presentation Type

Poster Presentation

Category

Social Sciences/Humanities

Abstract/Artist Statement

Culpability for one’s actions arguably hinges on one’s intentions: a negative outcome is judged more harshly when caused purposely rather than accidentally (Zelazo, Helwig, & Lau, 1996). However, do children apply this rule to a robot in the same way? And is this affected by their propensity to anthropomorphize? To investigate these questions, we tested 3- and 5-year-olds’ inferences about the intentions and culpability of two agents (a human and a robot) and whether their judgments were influenced by their general tendency to anthropomorphize.

Participants (current N=63; 46% female) in two age groups (3 years: n=32, M=3.60 years, SD=.58; 5 years: n=31, M=5.55 years, SD=.33) were randomly assigned to condition: human, robot (socially contingent or non-contingent), or control. In the Dumbbell Task (Meltzoff, 1995), participants watched a video of either a human or a robot (socially contingent or non-contingent) attempting to pull apart a wooden dumbbell (i.e., an intended-but-failed action). The participant was then given the dumbbell. If children understood the agent as intentional (i.e., as trying to pull the dumbbell apart), they should complete the intended-but-failed action (pull the dumbbell apart). Children who observed the robot or human agent’s intended-but-failed action were significantly more likely to pull the dumbbell apart than controls who did not observe the intended-but-failed action. Completion did not differ by age (p=.55), gender (p=.83), or across the robot and human conditions (ps>.86).

In the Tower Task, participants viewed a video in which the human or robot agent watched a person build a block tower and then knocked the tower over in a manner that could be construed as either accidental or intentional. Participants judged the agent’s action in terms of acceptability, punishment, and intentionality (‘on accident’ or ‘on purpose’). Culpability scores were calculated as the difference between acceptability and punishment judgments, such that higher culpability scores indicated the act was less acceptable and deserved greater punishment. Children who thought the agent intentionally (versus accidentally) knocked over the tower viewed the act as less acceptable (M=1.36 vs. M=1.86, t(59)=2.13, p=.04), as more deserving of punishment (M=3.28 vs. M=2.51, t(59)=-2.40, p=.02), and gave higher culpability scores (M=1.88 vs. M=0.66, t(57)=2.61, p=.01). Children also viewed the human as more culpable than the robot, as evidenced by higher culpability scores (p=.04).
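For concreteness, a minimal worked reading of the culpability score, assuming the difference is taken as punishment minus acceptability (the direction of the subtraction is an assumption, not stated above):

\[ \text{Culpability} = \text{Punishment} - \text{Acceptability} \]
\[ \text{Intentional: } 3.28 - 1.36 \approx 1.92, \qquad \text{Accidental: } 2.51 - 1.86 \approx 0.65 \]

These values are close to the reported culpability means of 1.88 and 0.66; the small discrepancies are consistent with the difference scores being computed on a slightly smaller sample (t(57) vs. t(59)).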

Finally, participants were administered the Individual Differences in Anthropomorphism Questionnaire-Child Form (Severson & Lemm, 2016). Children who scored higher on anthropomorphism viewed the robot, but not the human, as more deserving of punishment (r=.51, p=.01) and more culpable (r=.39, p=.01). Anthropomorphism was not linked to inferences of intentionality on the Dumbbell Task.

Taken together, these findings indicate that children inferred a robot has intentions to the same degree as a human, and that interpretations of intentionality were linked to moral culpability. Yet children viewed the robot as less culpable than the human. Importantly, children with greater tendencies to anthropomorphize were more likely to view the robot as morally culpable for its actions. These results provide converging evidence, consistent with previous research, that children ascribe mental states to robots. In addition, the results show how children’s tendencies to anthropomorphize contribute to their judgments about robots’ moral responsibility.

Mentor Name

Rachel Severson

GradCon 2021 .mp4 (412965 kB)
