Do We Know It When We See It? The DEFIANCE Act and the Use of AI to Generate Deepfake Images

Julia Abreu Siufi*

On January 24, 2024, the social media platform X, formerly known as Twitter, was flooded with sexually explicit AI-generated images of Taylor Swift.1 The images attracted more than forty-five million views before the platform removed them for violating its policies.2 X’s safety team reiterated that posting nonconsensual nude images is not allowed under the platform’s terms and policies,3 but by then the images had already spread. The incident brought attention to a growing concern with AI technology: the nonconsensual use of AI platforms and generators to create sexually explicit images of real people, also known as “deepfakes.”4 

Taylor Swift’s nonconsensual explicit images originated from a forum called 4chan,5 and Swift is not the only victim whose image has been used there. 4chan users have crafted violent and pornographic nonconsensual images of numerous public figures using AI technology, and “anyone can be targeted in this way, from global celebrities to school children.”6 While Swift is far from the only person affected by this technology, this time the controversy sparked legislative action: on January 30, 2024, senators from the Senate Judiciary Committee introduced the DEFIANCE Act of 2024.7 

The DEFIANCE Act (“Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024”) is a bipartisan bill introduced by Senators Durbin, Graham, Klobuchar, and Hawley.8 The bill’s purpose is to strengthen the rights of people affected by nonconsensual intimate forged digital images and to allow them to seek civil remedies.9 The bill is a significant step forward in protecting people against deepfakes, as currently only ten states have laws addressing this issue.10 

While the bill is a step in the right direction in protecting people against the use of their personal image in deepfakes, it raises legal questions that will have to be addressed if the bill is enacted: what “identifiable” truly means under the DEFIANCE Act, whether the remedies available to victims are sufficient, and whether a review of Section 230 of the Communications Act of 1934 is needed.11

I. IDENTIFIABILITY

The DEFIANCE Act protects people who are “identifiable” in a deepfake image.12 The bill defines an “identifiable individual”13 as one who “appear[s] to a reasonable person to be indistinguishable from an authentic visual depiction of the individual . . . .”14 This circular definition will almost certainly lead to litigation. Because the bill lacks a clear standard, questions of what makes an image identifiable will arise, and courts will have to decide the issue. 

Under the bill’s current language, the deepfake must be identifiable for the victim to be able to seek civil remedies. Thus, if a defendant successfully argues that the image is not identifiable, the victim can no longer seek any remedies against the defendant. This is true even if the images harmed the victim’s life and reputation and were portrayed as if they were the victim’s own explicit images. 

Moreover, what will be sufficient to identify a person’s image? Will a caption suffice? Will a picture showing a person’s body but not their face be identifiable under the DEFIANCE Act of 2024? Or will this be another instance of “I know it when I see it”?15

II. REMEDIES

If the DEFIANCE Act of 2024 is enacted, it will give victims of deepfake AI technology the opportunity to seek civil remedies.16 Victims would be allowed to bring a civil action against any person who knowingly produced the image, possessed the digital forgery with intent to disclose it, or knowingly disclosed it.17 Victims would also be allowed to seek other remedies, such as temporary restraining orders, preliminary injunctions, or even permanent injunctions to stop the display or disclosure of the images.18 

But in the world of the internet, social media platforms, and online forums, where users can act anonymously, how will a victim know who the offender is? Victims may have to spend money and resources, which they might not have, just to identify the perpetrator of the deepfakes. Additionally, how will victims be able to seek remedies if the person producing, distributing, or disclosing the images is in another country? The bill does not address these questions either. 

The bill also does not criminalize the conduct; it only allows victims to sue the perpetrators civilly. But what if the perpetrators have no money? Victims would bear the cost of civil litigation and might never collect the remedies they are entitled to. Would a bill criminalizing the conduct be more effective?

III. SECTION 230 OF THE COMMUNICATIONS ACT

The DEFIANCE Act limits victims to seeking remedies only from those who produced the deepfake images, possessed them with intent to disclose, or knowingly disclosed them.19 It does not give victims the possibility of seeking civil remedies from the internet providers where the images are distributed or from the AI generators where the images were created. 

Internet providers and social media platforms are given civil liability immunity by Section 230 of the Communications Act of 1934.20 This prevents victims from successfully bringing actions against internet providers, social media platforms, and AI generators. Section 230 was designed to incentivize internet providers to moderate content while also insulating them from liability when they, in “good faith,” fail to prevent criminal or civil wrongs from occurring on their platforms.21 Section 230(c) immunity is even titled “Good Samaritan”22 protection in the statute. This immunity, however, has left platforms with no true incentive to moderate the content being spread, and by shielding internet providers from suit, Congress limits victims’ ability to seek remedies and reparations. 

In Taylor Swift’s case, within twenty-four hours X had blocked searches for her name, and her fans filled her tag with ordinary, everyday images of the singer; both tactics helped stop the spread of her deepfake images.23 That outcome, however, is not attainable for most victims of deepfake images circulating online: they cannot stop their names from being searched, do not have fans to help drown out the pictures, and lack the popularity to make X, or any other platform, aware of their issue within twenty-four hours.24 

With AI technology growing in popularity, is it time to review the immunity granted to internet providers by Section 230 of the Communications Act? And if the statute is revised and moderation is required of internet providers, will that implicate constitutional rights such as freedom of speech? These are all legal questions that may, and probably will, arise if the bill is enacted. 

IV. CONCLUSION

The bill raises several legal questions that will probably be litigated if it is enacted. Nonetheless, it is a positive step forward. Ultimately, the bill is needed in the face of the rise of AI technology, and it gives victims of nonconsensual explicit forged images protection under the law. 


*Julia Abreu Siufi, J.D. Candidate, University of St. Thomas School of Law Class of 2025, University of St. Thomas Law Journal (Associate Editor).

1. Jess Weatherbed, Trolls Have Flooded X with Graphic Taylor Swift AI Fakes, The Verge (Jan. 25, 2024, 10:04 AM CST), https://www.theverge.com/2024/1/25/24050334/x-twitter-taylor-swift-ai-fake-images-trending.
2. Id.
3. Safety (@Safety), Twitter (Jan. 26, 2024, 12:18 AM), https://twitter.com/Safety/status/1750765055380263272?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7CtCtwte%5E1750765055380263272%7Ctwgr%5Eafd66a415cd1ab4a42b46683c9059c748a598469%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.theverge.com%2F2024%2F1%2F25%2F24050334%2Fx-twitter-taylor-swift-ai-fake-images-trending.
4. Danielle Keats Citron, The Continued (In)visibility of Cyber Gender Abuse, 133 Yale L.J.F. 333, 342 (2023).
5. Kate Gibson, Fake and Graphic Images of Taylor Swift Started with AI Challenge, CBS News (Feb. 5, 2024, 2:47 PM EST), https://www.cbsnews.com/news/taylor-swift-artificial-intellignence-ai-4chan/.
6. Id.
7. Press Release, U.S. Senate Comm. on the Judiciary, Durbin, Graham, Klobuchar, Hawley Introduce DEFIANCE Act to Hold Accountable Those Responsible for the Proliferation of Nonconsensual, Sexually-Explicit “Deepfake” Images and Videos (Jan. 30, 2024), https://www.judiciary.senate.gov/press/releases/durbin-graham-klobuchar-hawley-introduce-defiance-act-to-hold-accountable-those-responsible-for-the-proliferation-of-nonconsensual-sexually-explicit-deepfake-images-and-videos.
8. S. 3696, 118th Cong. (2024).
9. Id.
10. Solcyré Burga, How a New Bill Could Protect Against Deepfakes, Time (Jan. 31, 2024, 4:34 PM EST), https://time.com/6590711/deepfake-protection-federal-bill/.
11. 47 U.S.C. § 230.
12. Press Release, U.S. Senate Comm. on the Judiciary, supra note 7.
13. S. 3696, 118th Cong. (2024).
14. Id.
15. Jacobellis v. State of Ohio, 378 U.S. 184, 197 (1964) (Stewart, J., concurring).
16. S. 3696, 118th Cong. (2024).
17. Id.
18. Id.
19. S. 3696, 118th Cong. (2024).
20. 47 U.S.C. § 230.
21. Shlomo Klapper, Reading Section 230, 70 Buff. L. Rev. 1237, 1250–52 (2022).
22. 47 U.S.C. § 230(c).
23. WSJ Tech News Briefing, Why X Blocked Searches for Taylor Swift, The Wall Street Journal, at 02:16 (Jan. 30, 2024, 3:01 AM), https://www.wsj.com/podcasts/tech-news-briefing/why-x-blocked-searches-for-taylor-swift/2eb5b803-ffa4-429e-b58f-82fbf69f218e#:~:text=You%20would’ve%20received%20an,evening%2C%20X%20lifted%20the%20ban.
24. Id. at 02:29.
