Composite image of hammer and gavel with US flag background

US judge sides with generative AI over artists in potentially precedent-setting ruling

A judge has “largely” sided with Midjourney, DeviantArt, and Stability AI in their bid to dismiss a class-action lawsuit filed by three artists accusing the firms and their generative AI programs of various violations, including copyright infringement.

On Monday, U.S. District Judge William Orrick of the Northern District of California ruled in favor of several firms behind generative artificial intelligence (AI) tools, dismissing most of the claims against them in a copyright infringement class action brought by three artists. The ruling potentially deals a blow to other artists and creators fighting copyright cases against generative AI developers and users.

Citing a lack of evidence, Judge Orrick described the case, filed by artists Sarah Andersen, Kelly McKernan, and Karla Ortiz in January, as “defective in numerous respects.”

The charges leveled against the AI developers were direct copyright infringement, vicarious copyright infringement, violation of the Digital Millennium Copyright Act, violation of the statutory right of publicity, violation of the common law right of publicity, unfair competition, and breach of contract.

On April 18, all three defendants filed separate motions to dismiss the various charges, with Midjourney and Stability AI also joining DeviantArt’s special motion to strike under the California Code of Civil Procedure.

“Finding that the Complaint is defective in numerous respects, I largely grant defendants’ motions to dismiss and defer the special motion to strike,” said Orrick.

A special motion to strike, often referred to as an “anti-SLAPP” (Strategic Lawsuit Against Public Participation) motion, is used when a defendant in a civil lawsuit believes the plaintiff’s claims target the defendant’s exercise of First Amendment rights, such as freedom of speech or the right to petition the government.

Freedom of speech would be relevant if the judge had upheld the plaintiffs’ ‘right of publicity’ claims, which assert that individuals and businesses have a legal right to control and profit from the commercial use of their name, image, or likeness, and to prevent its use without their permission.

As this claim was dismissed “with leave to amend,” the special motion to strike was deferred until the plaintiffs provide more detail on their claim.

“The problem for plaintiffs is that nowhere in the Complaint have they provided any facts specific to the three named plaintiffs to plausibly allege that any defendant has used a named plaintiff’s name to advertise, sell, or solicit purchase of… [the] product,” said Orrick. “Plaintiffs need to clarify their right of publicity theories as well as allege plausible facts in support.”

Despite dismissing a large part of the plaintiffs’ case, Judge Orrick did leave the door open for them to amend their claim:

“To provide clarity regarding their theories of how each defendant separately violated their copyrights, removed or altered their copyright management information, or violated their rights of publicity and plausible facts in support.”

Orrick also allowed a copyright infringement claim from one class member against Stability to proceed, giving the class 30 days to submit an amended suit with more evidence.

“Even Stability recognizes that determination of the truth of these allegations — whether copying in violation of the Copyright Act occurred in the context of training Stable Diffusion or occurs when Stable Diffusion is run — cannot be resolved at this juncture,” Orrick wrote, in relation to the undecided claim.

The lawsuit was first filed in January and alleged that the companies used the artists’ works without consent or compensation to build the training sets that inform their AI algorithms, allowing users to generate artworks and images that may be insufficiently transformative of the artists’ existing, protected works.

“Defendants are using copies of the training images… to generate digital images and other output that are derived exclusively from the Training Images, and that add nothing new,” claimed the initial filing, which went on to argue that this would “substantially negatively impact the market for the work of plaintiffs and the class.”

Specifically, Andersen and McKernan both claimed that their art has been used in LAION (Large-Scale Artificial Intelligence Open Network) datasets, which Stability used to train its Stable Diffusion tool, a text-to-image model that produces computer-synthesized images in response to text prompts.

Importantly for legal precedent, in his October 30 order, Judge Orrick said he was “not convinced” these generative AI images infringed copyright and that it was “simply not plausible” that every training image was copyrighted.

“Even if that clarity is provided and even if plaintiffs narrow their allegations to limit them to Output Images that draw upon Training Images based upon copyrighted images, I am not convinced that copyright claims based [on] a derivative theory can survive absent ‘substantial similarity’ type allegations,” said Orrick.

This suggests that to prove any claim of copyright infringement, the artists will have to amend their case to show substantial similarity between their work and the output of the generative AIs.

As the first major decision in a generative AI copyright infringement case, Judge Orrick’s ruling could have significant implications for several other lawsuits currently being fought in U.S. courts.

Other AI cases

Generative AI is the subject of several ongoing copyright violation claims in the U.S., with creators and artists in various fields taking developers and users to court to protect their IP.

Stability AI, one of the defendants favored by Judge Orrick’s Monday decision, was the subject of another complaint in February, when image licensing service Getty Images filed a lawsuit accusing the Stable Diffusion text-to-image program of improperly using its photos, violating the copyright and trademark rights Getty holds in its watermarked photograph collection.

Getty (NYSE: GETY) accused Stability AI of “brazen infringement of Getty Images’ intellectual property on a staggering scale,” claiming Stability has copied more than 12 million photographs from Getty Images’ collection without permission or compensation.

In a different artistic field, a July lawsuit filed by U.S.-based comedian and writer Sarah Silverman, along with two other authors, claimed OpenAI had infringed their copyrights in the training of the firm’s AI algorithm.

“Much of the material in OpenAI’s training datasets… comes from copyrighted works—including books written by Plaintiffs—that were copied by OpenAI without consent, without credit, and without compensation,” said the complaint.

A similar case was filed recently by the U.S. Authors Guild, which sued OpenAI for allegedly infringing the copyrights of its members in the training of ChatGPT. Those members include well-known literary names such as John Grisham, George R.R. Martin, Jodi Picoult, and David Baldacci.

According to the plaintiffs’ September 19 filing, OpenAI used existing books to train its AI model without seeking the express consent or permission of the copyright owners, in what the Authors Guild described as “flagrant and harmful infringement.”

“Defendants copied Plaintiffs’ works wholesale, without permission or consideration,” claimed the filing. “These algorithms are at the heart of Defendants’ massive commercial enterprise. And at the heart of these algorithms is systematic theft on a mass scale.”

The Authors Guild also argued that the indiscriminate use of copyrighted material in the training of AI models could put the entire literary sector at risk and cut into the earnings of fiction writers, since ChatGPT users can generate works that imitate established authors and even sell them as original pieces.

Many of these cases revolve around the concept of ‘fair use,’ which in turn may depend on how closely the AI output resembles the artists’ original works. As of October 30, Judge Orrick has laid down a marker: claims will need to show “substantial similarity” if they are to have a chance of proving copyright infringement.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

