AI’s Lurking Danger: Deepfakes and the Liar’s Dividend

Noah Barnaby*
How will generative AI reshape the practice of law? Since the release of ChatGPT in late 2022, this question has been analyzed in countless news articles,1 law journals,2 and blogs3 (including this one4). Many of these sources highlight the opportunities presented by AI, such as increased research efficiency and the end of associates spending nights in warehouses reviewing documents.5 Others highlight its dangers and pitfalls, especially the risk of hallucinated cases being submitted to courts.6 But one under-explored corner of the AI revolution may have an outsized impact not only on the practice of law but on how society views and interprets facts more broadly.
While the dangers of AI-generated false information are already in the spotlight, that false information also casts an equally dangerous shadow. As public awareness of the risks of AI-generated content grows, it becomes easier for bad actors to leverage the resulting skepticism and claim that authentic evidence was created by AI. Robert Chesney and Danielle Citron dubbed this phenomenon the “Liar’s Dividend.”7 This article will explore the implications of the liar’s dividend for the courts and examine the solutions available to mitigate its impact.
Chesney and Citron argue that AI-assisted technologies could create significant challenges to privacy, democracy, and national security through the spread of “deep-fakes”—hyper-realistic images and videos of public figures doing and saying things they neither did nor said.8 The threats this technology creates are significant, including extortion and blackmail of individuals, the manipulation of elections, and “truth decay” brought on by siloed media environments and eroding trust in journalism.9 But the danger of the liar’s dividend is unique: it spreads even when AI is not used directly.10
“Imagine a situation,” they write, “in which an accusation is supported by genuine video or audio evidence. As the public becomes more aware of the idea that video and audio can be convincingly faked, some will try to escape accountability for their actions by denouncing authentic video and audio as deepfakes. Put simply: a skeptical public will be primed to doubt the authenticity of real audio and video evidence.”11
In the courtroom, the liar’s dividend becomes what Rebecca Delfino calls the “deepfake defense.”12 For Delfino, the deepfake defense is uniquely dangerous to the justice system and its fact-finding process because it raises existential questions about reality in ways that other types of scientific evidence do not.13 Deepfakes “invite the viewer . . . to choose their truth based not merely on the evidence presented in the trial but also based on their individual perceptions of truth.”14 As deepfake technology develops beyond what the human eye can easily detect, the reliability and efficiency essential to determining the authenticity of evidence are lost.15 “Seeing is believing” no longer.16
A chilling example of the potential impact of the liar’s dividend appears in an article co-authored by Abhishek Dalal, District of Minnesota Judge John Tunheim, and others.17 In it, they posit a hypothetical U.S. presidential election in which one candidate accuses her opponent of spreading AI-generated defamatory material and seeks an injunction against him, presenting recordings of the opponent plotting to use deepfakes to undermine her campaign.18 The opponent, in turn, insists that the material he shared about the candidate was real and that her evidence against him is the fabrication.19 This scenario presents tremendous evidentiary issues. How the court chooses to resolve them, and whether its resolution is accepted by the wider public, would have implications not only for the election but for the future of the country.
Can We Reduce the Liar’s Dividend?
Are the courts equipped to address the liar’s dividend? Is the answer to dangerous technologies simply new, better technologies? Can we legislate or incentivize our way out of this problem? The answers are not yet settled.
For Chesney and Citron, the best hope lies in a multi-faceted approach that embraces all these possible solutions. They highlight the potential efficacy of imposing civil liability on platforms that host deepfakes, including by amending the Communications Decency Act to require platforms to take reasonable steps to ensure they are not being used for illegal ends.20 They also emphasize the importance of placing pressure on foreign actors who deploy deepfakes, such as through the imposition of sanctions.21 At the same time, Chesney and Citron are skeptical that technology alone could achieve the scale and reliability necessary to effectively combat the trends created by the spread of deepfakes.22 Ultimately, they question how much this problem can truly be mitigated and call for further creative solutions.23
Delfino likewise calls for a multi-directional response to the problem of the deepfake defense.24 She argues that courts’ sanction powers and other existing procedural rules are likely inadequate to confront it, owing to judicial hesitancy in imposing sanctions, high burdens of proving misconduct, and inconsistent application of the rules across jurisdictions.25 ABA ethical rules also fall short due to under-enforcement and unclear application in the absence of an explicit prohibition on raising the deepfake defense.26 For Delfino, new rules will be required for the courts to properly address this issue. These include expanding Federal Rule of Civil Procedure 11 to reach oral arguments and bad-faith actions or tactics—not just written statements—and amending ABA ethical rules to bar attorneys from questioning the authenticity of evidence without a reasonable belief that the evidence is false.27 These changes are necessary, she urges, because the deepfake defense presents an unprecedented challenge to our legal system and democratic institutions.28
Dalal and his coauthors are more optimistic about the capacity of current rules to adapt to the challenge of the liar’s dividend. They, too, are skeptical that a merely technological solution will suffice,29 but they argue that existing rules can be adapted, using new methodologies, to confront the problem effectively.30 This approach includes proactive pretrial conferences that allow deepfake allegations to be raised early and let the parties obtain discovery of relevant evidence.31 They also encourage heightened judicial scrutiny of contested evidence, weighing its probative value against its prejudicial effect, given the potentially dramatic impact of deepfakes on jurors’ perceptions.32
One other avenue offers hope for addressing the liar’s dividend: public education. While fears about AI disinformation expand the liar’s dividend in their shadow, bringing the dividend into the light weakens its power.33 Cautious assessments of the threat posed by AI-generated content like deepfakes, coupled with education about the incentives for, and risks of, claiming that real content is AI-generated, can reduce the payoff for bad actors who make such claims in the first place.34
*Noah Barnaby, J.D. Candidate, University of St. Thomas School of Law Class of 2027 (Associate Editor).
1. See, e.g., Isabelle Bousquette, Double Acquisition Highlights How Legal Industry Is Slowly Embracing AI, Wall St. J. (Aug. 29, 2023), https://www.wsj.com/articles/double-acquisition-highlights-how-legal-industry-is-slowly-embracing-ai-4a1e042c?reflink=desktopwebshare_permalink [https://perma.cc/W9YK-52R7].
2. See, e.g., Joely Williamson, The Rise of AI in Legal Practice: Opportunities, Challenges, & Ethical Considerations, Colo. Tech. L.J. (Mar. 21, 2025), https://ctlj.colorado.edu/?p=1297 [https://perma.cc/FA7D-TEMV].
3. See generally Artificial Intelligence Archive, Above the Law, https://abovethelaw.com/tag/artificial-intelligence/ [https://perma.cc/KQ6A-DGTK].
4. Elizabeth Edinger, The Use of Artificial Intelligence in Job Recruiting and its Legal Risks, Univ. St. Thomas L.J. Blog (Apr. 26, 2025), https://ustlawjournal.blog/2025/04/26/the-use-of-artificial-intelligence-in-job-recruiting-and-its-legal-risks/ [https://perma.cc/Y72T-3XPA]; Jake Heyer, Navigating the Future of Law: How AI is Reshaping Legal Practice, Univ. St. Thomas L.J. Blog (Jan. 29, 2025), https://ustlawjournal.blog/2025/01/29/navigating-the-future-of-law-how-ai-is-reshaping-legal-practice/ [https://perma.cc/2FU9-UZ76]; Michael Hurd, Generative AI is Evolving Quickly, and Artists Want Their Fair Share, Univ. St. Thomas L.J. Blog (Dec. 17, 2024), https://ustlawjournal.blog/2024/12/17/generative-ai-is-evolving-quickly-and-artists-want-their-fair-share/ [https://perma.cc/HH8D-SEKC]; Julia Abreu Siufi, Do We Know When We See It? The DEFIANCE Act and the Use of AI to Generate Deepfake Images, Univ. St. Thomas L.J. Blog (Apr. 29, 2024), https://ustlawjournal.blog/2024/04/29/do-we-know-when-we-see-it-the-defiance-act-and-the-use-of-ai-to-generate-deepfake-images/ [https://perma.cc/AHQ9-JK2N]; Zekriah Chaudhry, A Law Student’s Guide to Getting Started with AI, Univ. St. Thomas L.J. Blog (Nov. 1, 2023), https://ustlawjournal.blog/2023/11/01/a-law-students-guide-to-getting-started-with-ai/ [https://perma.cc/KZA2-R9ZL]; Jack Thram, Artificial Intelligence, Real Musicality: Reconciling Generative AI with Copyright Law, Univ. St. Thomas L.J. Blog (Oct. 10, 2023), https://ustlawjournal.blog/2023/10/10/artificial-intelligence-real-musicality-reconciling-generative-ai-with-copyright-law/ [https://perma.cc/AN9C-PMV2].
5. See Nicole Black, 10 More AI Use Cases for Lawyers: Research, Writing, and Law Firm Management, 8am (Aug. 19, 2023), https://www.8am.com/blog/ten-more-ai-use-cases-lawyers-research-writing-law-firm-management/?utm_campaign=8am-evergreen-nativecontent-sept25&utm_medium=web&utm_source=abovethelaw [https://perma.cc/PVV2-A468]; The Competitive Advantages AI Brings To Litigation Firms, Above the Law (May 16, 2025), https://abovethelaw.com/2025/05/the-competitive-advantages-ai-brings-to-litigation-firms/ [https://perma.cc/5W44-YAHV].
6. See Stephen Embry, Hallucinations Here, Hallucinations There, Hallucinations Everywhere: Why Do Lawyers Keep Doing It?, Above the Law (July 20, 2025), https://abovethelaw.com/2025/07/hallucinations-here-hallucinations-there-hallucinations-everywhere-why-do-lawyers-keep-doing-it/ [https://perma.cc/PWL5-GXVJ]; Varun Magesh et al., AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries, Stan. Univ. Human-Centered A.I. (May 23, 2024), https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries [https://perma.cc/CQH5-WW34].
7. Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Cal. L. Rev. 1753, 1758 (2019).
8. Id. at 1753.
9. Id. at 1776–84.
10. Id. at 1785.
11. Id.
12. Rebecca Delfino, The Deepfake Defense—Exploring the Limits of the Law and Ethical Norms in Protecting Legal Proceedings from Lying Lawyers, 84 Ohio St. L.J. 1068 (2024).
13. Id. at 1081.
14. Id.
15. Id. at 1076–77.
16. Id.
17. Abhishek Dalal et al., Deepfakes in Court: How Judges Can Proactively Manage Alleged AI-Generated Material in National Security Cases, U. Chi. Legal F., Vol. 2024, at 25.
18. Id. at 85–88.
19. Id.
- Chesney & Citron, supra note 7, at 1799–800 (they also highlight risks of this approach, including over-deterrence and creeping censorship). ↩︎
21. Id. at 1811–13.
22. Id. at 1787–88.
23. Id. at 1819.
24. Delfino, supra note 12, at 1123–24.
25. Id. at 1089–103.
26. Id. at 1112–17.
27. Id. at 1118–21.
28. Id. at 1122–23.
29. Dalal et al., supra note 17, at 83–85.
30. Id. at 90–98.
31. Id.
32. Id. at 79, 90–98.
33. Josh A. Goldstein & Andrew Lohn, Deepfakes, Elections, and Shrinking the Liar’s Dividend, Brennan Ctr. for Just. (Jan. 23, 2024), https://www.brennancenter.org/our-work/research-reports/deepfakes-elections-and-shrinking-liars-dividend [https://perma.cc/H43L-WYHN].
34. Id.
