Document Type

Article

Publication Date

2025

First Page

65

Volume

27

Issue

1

Source Publication Abbreviation

North Carolina Journal of Law & Technology

Abstract

Ethical duties appear poised to be the primary regulatory tool for responsible use of generative AI (“GAI”) by attorneys. This reality necessitates a clear understanding of what the duty of competence requires of attorneys using GAI. Recent state bar and American Bar Association (“ABA”) guidance has coalesced around a foundational concept of informed decision-making, which requires that attorneys have sufficient knowledge about the GAI tool they are using and the specific task at hand to make an informed decision that employing the tool for that task is in the client’s best interests. Competence also requires that attorneys avoid automation bias and mitigate GAI tools’ limitations, including not only hallucinations but also incomplete, inaccurate, and misgrounded outputs. Recent ethical guidance requires that attorneys retain cognitive agency when completing tasks that require human judgment and reasoning. These tasks render GAI’s “black box,” or lack of explainability, particularly problematic, and require attorneys to resist automation complacency, which refers to humans’ reduced capacity to understand or complete the tasks for which they rely on GAI.