Generative AI is Evolving Quickly, and Artists Want Their Fair Share

*Michael Hurd

“There ain’t no such thing as a free lunch.” And soon, generative AI companies may have to pay up. The law surrounding AI is evolving rapidly to keep pace with new AI technology. In the early days of generative AI, the technology was mostly used to predict text.1 While a user was typing, the AI model would look back at what the user had typed before and use that information to guess what the user might type next.2 Anyone who used a smartphone when predictive text first came out knows that the technology was hardly intelligent. In the smartphone’s defense, predictive text was working with a very small database of information to build its model. That all changed when AI companies like OpenAI, the maker of ChatGPT, began building their AI models to pull data from millions of sources all over the internet.3
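To make the contrast concrete, early predictive text can be approximated by a simple word-frequency model: count which word most often follows each word, then suggest it. The sketch below is illustrative only, with a hypothetical toy corpus; real smartphone keyboards used more elaborate models, but the core idea of guessing from prior text is the same.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for a user's typing history.
corpus = "the court held that the court may review the case".split()

# Count, for each word, which words have followed it.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict(word):
    """Suggest the word most often seen after `word`, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "court" — it follows "the" twice, "case" only once
```

The limitation the post describes falls out of the design: with only a small personal typing history as the corpus, the model can only parrot past habits, whereas modern generative models draw on content scraped from across the internet.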

Modern generative AI models work by scanning the internet for content to train on.4 When prompted by users, the software searches for relevant content and produces an output containing elements from a number of sources.5 The problem is that AI companies don’t own the content their software trains on, and so far, these companies have not attempted to license that content.6 This has led artists from a variety of disciplines to bring lawsuits against AI companies.7 AI companies have fired back, claiming that their use of the artists’ content is fair use, a defense to copyright infringement.8 Luckily for the aggrieved artists, the United States Supreme Court narrowed the applicability of the fair use doctrine in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith in 2023.9 Warhol interpreted the first factor of fair use—the purpose and character of the use, including whether the use is of a commercial nature or is for nonprofit educational purposes—to be assessed objectively. The Supreme Court quoted the Court of Appeals: “[W]hether a work is transformative cannot turn merely on the stated or perceived intent of the artist or the meaning or impression that a critic—or for that matter, a judge—draws from the work.”10 Warhol does not involve AI, but its precedent is likely to shape the onslaught of AI cases brought by artists. Broadly, these lawsuits allege the same thing: AI companies are using the artists’ work to train their models. The details of these lawsuits, however, vary with the form of art the models generate.

A popular recent use of AI is creating images. Users can type a prompt into a search bar, and the AI model will combine elements of the request into a new, one-of-a-kind image. For example, I can type “Supreme Court Justices as a Van Gogh painting” into the software’s search bar, and the software will generate a painting of Supreme Court Justices in the style of a Vincent Van Gogh painting. Generative AI does this by scanning libraries of images from the internet and training its model on the images it finds relating to the Supreme Court and Vincent Van Gogh.11 Van Gogh, however, is not the only artist whose artwork has been used by an AI company.

Sarah Andersen is an artist bringing a class action, along with several other artists, against Stability AI, alleging copyright infringement, Digital Millennium Copyright Act (DMCA) violations, false endorsement, and trade dress claims.12 Just as Van Gogh had a distinctive style, so does Ms. Andersen in her cartoons. Stability AI’s models use her art to create new images. A major question in the dispute is exactly how Stability AI’s models work. Andersen’s complaint alleges that Stability AI copied over five billion images to train its software.13 The complaint makes similar allegations against other generative AI companies, claiming that the output created by these models competes with the original work of the artists the models draw from.14 Some, however, are skeptical of these allegations. Defenders of generative AI claim that what training models actually do is analyze discrete elements of a work to build a new image.15 They contend that copyright is intended to cover only the original expression of an author; consequently, copyright law does not protect certain elements and underlying subjects depicted in the protected works.16 For example, just because an artist draws a cat does not give that artist an exclusive right to the idea of a cat.17 In the end, the court will have to decide exactly what Stability AI is using to build its models, how transformative the AI-generated images are, and the degree to which the AI output competes in the market against the original artwork. The lawsuits don’t end with image generation, either: writers, musicians, and journalists, to name a few, have initiated lawsuits against various AI companies, and most of these complaints begin with the claim that AI companies are using artists’ works to train their models.18

Looking forward, it’s unclear what the AI legal landscape will look like. Many of the cases brought against AI companies were filed within the last couple of years, and these cases don’t move quickly. Between class certification, extensive discovery, and briefing, resolution of these cases may be several years out. In an area of law changing as quickly as AI, it’s not clear what precedential value the opinions in these cases will carry in any event. The only certainty at the moment is change, and lawyers and law students should be ready to keep up.


*Michael Hurd, J.D. Candidate, University of St. Thomas School of Law Class of 2026 (Associate Editor).
