Oral Presentations and Performances: Session III

Project Type

Poster

Project Funding and Affiliations

College of Humanities and Sciences

Faculty Mentor’s Full Name

Soazig Le Bihan

Faculty Mentor’s Department

Philosophy

Abstract / Artist's Statement

This paper will examine whether the emergence of Artificial Intelligence-generated “hallucinated” legal cases poses a threat to the legal doctrine of stare decisis and the philosophical foundations of legal authority. While traditional debates regarding precedent have centered on whether courts should follow morally flawed or outdated decisions, recent instances of attorneys citing nonexistent cases generated by Artificial Intelligence establish an unprecedented problem. Using the 2023 case of Mata v. Avianca, Inc., I will argue that hallucinated precedent is not merely an issue of professional misconduct; it destabilizes the epistemic and normative conditions that make legal reasoning legitimate.

Artificial Intelligence hallucinations simulate the appearance of authoritative sources while lacking authentic institutional grounding. This simulation erodes the reliability of adjudication, distorts the character of the law, and threatens the legitimacy of the judicial system. By situating contemporary technological developments within classical jurisprudential debates, this paper argues that Artificial Intelligence hallucinations represent not only ethical failures but deeper structural challenges to the authority of precedent itself.

Category

Humanities

Apr 17th, 4:00 PM Apr 17th, 4:15 PM

Artificial Intelligence, Hallucinated Precedent, and the Epistemic Foundations of Stare Decisis

UC 330
