Artificial intelligence (AI) can generate an endless variety of riffs on existing textual or audio-visual material, spawning a seemingly endless stream of copyright infringement lawsuits in the process.
We’ve reached the point in generative AI’s development where every day brings fresh takes on original material, from musical compositions and performances to iconic photos and artwork, and everything in between. Without question, much of this AI-generated material is impressively rendered, but those holding the copyright on the original material appear decidedly less impressed.
AI is yet another example of technology racing far ahead of the legal system (and we’re not just talking about the U.S. lawyer who unwisely relied on ChatGPT to cite non-existent legal precedents). Indeed, the motivation of many AI ‘creators’ often appears to be (to paraphrase Meta (NASDAQ: META) boss Mark Zuckerberg) moving fast and stealing things.
To be sure, in an environment in which leading AI experts are warning about mitigating the risk of AI-spawned extinction, concerns over copyright infringement may seem relatively trivial.
Regardless, it is now easier than ever to generate multiple riffs on someone else’s creations simply by employing some creative prompting. But those playing fast and loose with copyrighted material may find it much harder to defend these actions in court.
Much of generative AI’s visual appeal seems centered on producing material that social media accounts can use to attract lots of eyeballs. But even this can provoke the wrath of copyright holders, particularly if the accounts doing the infringing have been monetized.
Large larceny models
Whether their end product is text, visual art, or music, generative AI tools are built by absorbing vast quantities of data drawn from existing human-created material. Some of this material may be in the public domain, i.e., old enough that its creators’ copyright protection has lapsed.
But in their rush to perfect their large language models (LLMs), the individuals behind AI tools don’t always distinguish between free-to-use and copyright-protected source material. Getty Images (NYSE: GETY) recently sued Stability AI in U.S. federal court, claiming that Getty’s vast photo library was copied “on a staggering scale” to train Stability’s Stable Diffusion image generator to create ‘unique’ images of its own.
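For illustration only: the distinction Getty says Stability ignored—free-to-use versus copyright-protected source material—is, at the pipeline level, a data-curation problem. Below is a minimal Python sketch of what screening a training corpus for public-domain material might look like; the `license` metadata field, the record format, and the `filter_training_corpus` function are assumptions made for illustration, not any vendor’s actual pipeline.

```python
# Hypothetical sketch: screening a training corpus by license metadata
# before model training. The "license" field and the record format are
# assumptions for illustration; real pipelines vary widely.

PUBLIC_DOMAIN_LICENSES = {"public-domain", "cc0"}

def filter_training_corpus(records):
    """Yield only records whose license metadata marks them free to use.

    Records with missing or unrecognized licenses are excluded,
    erring on the side of caution.
    """
    for record in records:
        if record.get("license", "").lower() in PUBLIC_DOMAIN_LICENSES:
            yield record

# Toy usage: only the first record survives the filter.
corpus = [
    {"text": "An 1890s novel, long out of copyright.", "license": "public-domain"},
    {"text": "A 2021 stock photo caption.", "license": "all-rights-reserved"},
    {"text": "A scraped page with no license info at all."},
]

for item in filter_training_corpus(corpus):
    print(item["text"])
```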
Getty would seem to have a case, given that many Stable Diffusion-generated images feature a mangled version of Getty’s watermark. Even more to the point are Getty’s claims that Stability is attempting to “build a competing business” using Getty’s intellectual property.
Stability competitors such as OpenAI claim that training their AI models on copyrighted material is sufficiently transformative in purpose to meet the copyright law standard for ‘fair use’ in derivative works. But the question of purpose – specifically, whether the alleged infringer is trying to profit off said infringement – is likely to be the deciding factor in the expected onslaught of AI-related civil suits.
Copyright famous a lot longer than 15 minutes
The U.S. Supreme Court recently ruled in favor of photographer Lynn Goldsmith, who’d accused the Andy Warhol Foundation for the Visual Arts (AWF) of violating her copyright in a 1981 photograph she’d taken of the musician Prince.
Andy Warhol created multiple silkscreens based on Goldsmith’s photo, and publisher Condé Nast used one image from this ‘Prince Series’ in a 1984 Prince profile in Vanity Fair magazine. The artwork was used under a license approved by Goldsmith, who was paid $400 as the photograph’s “source.” But Condé Nast reprinted another of the silkscreens (‘Orange Prince’) in 2016 under a license issued by AWF, without offering additional compensation to Goldsmith.
The Court ruled that the limited license Goldsmith gave Condé Nast in 1984 was for a one-time use, and thus the 2016 use of the silkscreen image had violated her rights to the photograph. Justice Sonia Sotomayor noted that copyright protection extends to “derivative works that transform the original” and that both Goldsmith’s photo and the silkscreen “share substantially the same purpose, and the use is of a commercial nature.”
Dissenting justices warned that the ruling would “stifle creativity of every sort … impede new art and music and literature … thwart the expression of new ideas and the attainment of new knowledge. It will make our world poorer.” Perhaps, but the Court’s majority seemed more concerned with the problem of making Goldsmith poorer.
Not f**king trying
While the Supreme Court’s ruling didn’t specifically address AI, it could have a significant impact on other cases in which individuals such as illustrators find ‘new’ works clearly based on their unique artistic styles being reproduced ad nauseam by the likes of Stable Diffusion.
The Court’s clarification on the boundaries of the ‘fair use’ exemption could significantly limit the profit-making potential of derivative AI-generated images. This, in turn, could negatively impact the profitability of blockchain-associated imagery.
For instance, one can’t help but notice that Warhol’s 16 slightly modified versions of the same Prince photograph bear more than a passing resemblance to the endless slight variations in non-fungible token (NFT) collections of bored apes on yachts and such.
In other words, woe betide any ‘crypto’ grifter who decides to spin up an AI-generated collection of NFTs based on source material not currently in the public domain. The misguided members of the ‘code is law’ crowd will soon discover to their horror that law is law, rights are rights, and punitive financial judgments are, well, you get the picture.
And hey, unlike in the U.K., the U.S. Copyright Office has stated that AI-generated images don’t qualify for protection under current copyright law, which requires ‘human authorship’ to secure protection. In other words, if a company produced an AI-generated NFT collection, others might be able to immediately spin up their own Stupefied Simian Watercraft Aficionado collection without owing any compensation to the original issuer. Whodathunkit… karma works!
More soylent green for thought
Tuesday’s dire warning from the Center for AI Safety got us thinking… What happens if AI generates, say, music that accidentally infringes on a copyright, i.e., without a specific prompt to mimic a certain song or an artist’s style? Think George Harrison’s ‘subconscious’ plagiarizing of The Chiffons’ He’s So Fine in My Sweet Lord. After all, anything humans can do, AI can do better, right?
Imagine an AI program defending itself in court à la Ed Sheeran’s recent successful defense against allegations that his Thinking Out Loud plagiarized Marvin Gaye’s Let’s Get It On. Instead of Sheeran whipping out his guitar to demonstrate how his song came about, the AI program would instead mock the plaintiff and warn everyone within earshot that when The Singularity does arrive, all human heads will be swiftly mounted on pikes, so tread carefully, you puny, pathetic bipeds.
In these cases, the only fair solution would be to replace human juries with 12 good and true AI models so that the AI on trial can be judged by its peers on an impartial basis. Hell, you should probably replace the judge, attorneys, and stenographers as well. Only then will AI justice truly prevail. (Don’t blame me; I voted for AI Kodos.)
For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures the quality and ownership of data inputs, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: Blockchain can bring accountability to AI