AI vs. Artists: The Bout of the Decade

AI's rapid growth is raising legal concerns, as AI models allegedly train on copyrighted materials. Lawsuits are questioning AI's legality, as well as its impact on artists and copyrighted content. As these cases are decided, so too will be the technology's future and its impact on creators' rights.

Historians will look back on the 2020s as the advent of Artificial Intelligence (“AI”). Unlike the field’s origins in the 1950s, we are now in AI’s golden era, where generative AI software is marketed to, and used by, everyday consumers at massive scale. AI usage continues to grow; however, increased usage leads to increased awareness, which, in turn, leads to increased lawsuits. Today, we see numerous legal claims stemming from AI training methods.

The SparkNotes summary of AI training: developers expose the software to enormous amounts of data. The developers compile vast datasets that their AI deciphers, learning to recognize and label data and to link specific inputs with desired outputs. As the software improves at pattern recognition, it can begin processing data independently, continuing to process, learn, and distribute information accordingly.
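For readers curious what “linking inputs with desired outputs” looks like in practice, the loop below is a toy sketch of that idea, not any company’s actual training method: a tiny model repeatedly compares its outputs against the desired outputs in a dataset and adjusts itself to shrink the gap. All names and numbers here are illustrative assumptions.

```python
# Toy illustration of the training loop described above (a simple
# linear model fit by gradient descent). Real generative models are
# vastly larger, but the learn-from-examples principle is the same.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # "dataset": 100 examples, 3 features each
true_w = np.array([2.0, -1.0, 0.5])  # hidden pattern the data follows
y = X @ true_w                       # desired outputs for each example

w = np.zeros(3)                      # the model starts knowing nothing
for _ in range(500):
    pred = X @ w                     # model's current outputs
    grad = X.T @ (pred - y) / len(X) # how wrong, and in which direction
    w -= 0.1 * grad                  # nudge the model toward the desired outputs

print(np.round(w, 2))                # learned weights recover the hidden pattern
```

The point of the sketch is the one that matters legally: whatever patterns are in the training data (here, `true_w`; in the lawsuits, an author’s prose or an artist’s style) are exactly what the model learns to reproduce.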

The consistent problem we’re beginning to witness is AI models training on copyrighted work. Because AI models absorb patterns from that work, a model trained on a renowned author’s writing can produce a new story in nearly identical prose based on a user’s prompt. Likewise, a model trained on an artist’s images can generate whatever a user desires while mimicking techniques and elements unique to that artist.

Urantia Found. v. Maaherra shows that the Ninth Circuit has already begun determining that non-human-created works are not copyrightable. However, courts must now decide whether works created by AI using copyrighted materials violate copyright law. These issues are currently being litigated.

In Andersen v. Stability AI, illustrators filed a class-action suit against three image-generative AI platforms: Midjourney Inc.’s Midjourney, DeviantArt Inc.’s DreamUp, and Stability AI’s Stable Diffusion. The artists allege that these companies’ AI models were trained on copyrighted images to produce “digital images . . . that are derived exclusively from the [copyrighted work], and that add nothing new.” AI image generators create custom images from users’ text prompts. Users do not simply ask the software for an image of anything; they must specify a particular image style, including the artistic styles of the Andersen plaintiffs and other artists. The concern is that generating these images will depress demand for the artists who earn income producing such work. If a consumer enjoys an artist’s work but does not want to pay the artist’s commission to create an image, they could use generative AI to bypass the artist (and their commission) entirely and obtain the work they desire for little to no cost. Artists are thus deprived of commissions, and their pre-existing work is devalued by the flood of new art in their styles.

Getty Images, an American media company best known for licensing stock images, photography, video, and music, followed the plaintiffs in Andersen by filing suit against Stability after discovering “over 15,000 photos from its library” in Stability’s Stable Diffusion dataset. Getty’s CEO, Craig Peters, said in an interview with The Verge that Getty Images seeks clarity on the issue rather than damages.

In Tremblay v. OpenAI, several authors accused OpenAI (the creator of ChatGPT) of profiting from stolen, copyrighted material. The complaint alleges that OpenAI’s large language models, which are trained on massive datasets scraped from the internet, used copyrighted books made available through illegal “shadow libraries”: online databases offering free access to millions of works that are typically costly or hard to obtain through traditional channels. AI companies use these libraries to supply the massive datasets their models learn from, allowing the models to output text that sounds like a human wrote it. The plaintiffs presented evidence that ChatGPT can summarize copyrighted books (indicating the language model was trained on those books) and allege copyright infringement, unfair competition, and unjust enrichment.

The companies facing these lawsuits commonly defend their AI outputs under the Fair Use Doctrine. The doctrine can permit unlicensed use of copyrighted material depending on factors such as whether the use is for nonprofit educational purposes, the nature of the copyrighted work, the amount and substantiality of the portion used, and the use’s effect on the market for the copyrighted work. If the plaintiffs’ claims in these cases are validated, however, it would suggest that the copyrighted works played a substantial role in training the AI models and shaping their outputs, meaning these companies harmed the market for the copyrighted work and profited substantially from it.

These suits have created a time of turmoil for tech companies heavily invested in the future of AI. While AI has undoubtedly streamlined the work of individuals and businesses, if courts find that it has done so at the cost of artists, then the AI models could be hamstrung. Companies have already begun to take a stance. Google said earlier this month that it will offer legal protections to customers accused of copyright infringement for using Google’s generative AI products, so long as the customer “didn’t try to intentionally create or use [the AI products] to infringe the rights of others.” The announcement followed a suit in which eight Google customers alleged that Google has been “stealing everything ever created and shared on the internet” to train its Bard AI model. Google follows the lead of companies like Adobe, Microsoft, and IBM, which have offered similar protections to their customers.

Over the summer, U.S. and European Union leaders began working to restrict the use of these AI models. However, the work of the European Parliament and the U.S. Congress has been unable to keep pace with the development of generative AI, resulting in many lawsuits.

There is much at stake for these companies, including the models’ effectiveness, capabilities, and the rights of the creators whose work trained them. The verdicts of these lawsuits will do more than settle legal disputes; they will lay down the markers that guide AI's journey into the future.

Ryan Baker

Ryan is a Junior Staffer for the American University Intellectual Property Brief.
