The United Kingdom government is reportedly working on legislation that would mandate greater transparency in how technology firms train artificial intelligence (AI) models and pool the datasets necessary for generative AI.

U.K. Culture Secretary Lucy Frazer said that the government is working on rules governing AI companies' use of books, music, and TV shows.

“The first step is just to be transparent about what they [AI developers] are using. [Then] there are other issues people are very concerned about,” Frazer told the Financial Times (FT), adding that AI represented a “massive problem not just for journalism, but for the creative industries.”

She went on to suggest that once creators have been informed that their work is being used to train AI models, “there’s questions about opt in and opt out [for content to be used], remuneration. I’m working with industry on all those things.”

Frazer declined to elaborate for the FT on exactly what mechanisms would be needed to deliver greater transparency and allow rights holders to determine whether the content they produced was being used in AI datasets.

Creators’ anger and legal questions

The rapid advancement of generative AI has propelled issues around authorship, infringement, and fair use to the fore. Programs such as OpenAI's DALL-E and ChatGPT and Stability AI's Stable Diffusion can generate 'new' images, text, and other content in response to a user's textual prompts or inputs.

These generative AI programs, many of which are built on large language models (LLMs), are deep-learning algorithms trained on huge datasets to generate new content. They learn to produce that content in part by ingesting large quantities of existing works such as writings, photographs, paintings, and other artworks.
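To make the training mechanics concrete, the sketch below shows how next-token prediction examples are derived directly from the text of existing works. Everything in it is illustrative and our own assumption rather than anything from the article: the corpus directory name, the toy whitespace tokenizer, and the tiny context window stand in for the much larger subword tokenizers and context lengths real systems use. The point it demonstrates is simply that the verbatim content of source works becomes the training signal.

```python
# Minimal sketch: deriving next-token training pairs from a corpus of
# existing works. File names, tokenizer, and window size are illustrative
# assumptions, not any specific vendor's pipeline.

from pathlib import Path

CONTEXT_WINDOW = 8  # toy context length; real models use thousands of tokens

def tokenize(text: str) -> list[str]:
    # Toy whitespace tokenizer; production systems use subword tokenizers.
    return text.split()

def build_examples(corpus_dir: str):
    """Yield (context, next_token) pairs from every document in the corpus.

    Each pair is one supervised example: the model is trained to predict
    `next_token` given `context`, which is how the content of the source
    works enters the model's parameters.
    """
    for path in Path(corpus_dir).glob("*.txt"):
        tokens = tokenize(path.read_text(encoding="utf-8"))
        for i in range(len(tokens) - CONTEXT_WINDOW):
            yield tokens[i : i + CONTEXT_WINDOW], tokens[i + CONTEXT_WINDOW]

if __name__ == "__main__":
    # "scraped_works" is a hypothetical directory of collected writings.
    for context, target in build_examples("scraped_works"):
        print(context, "->", target)
        break
```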

This has drawn the ire of creatives and thrown up two distinct legal issues with regard to copyright and IP: first, who, if anyone, owns the copyright to content created using these programs; and second, whether that AI-generated work infringes the copyright of one of the creators whose works the AI pooled into its dataset.

It’s the latter concern that has prompted a number of creators to sue developers of generative AI and LLMs. In January 2023, for example, three visual artists sued multiple generative AI platforms in the U.S. (Stability AI, Midjourney Inc., and DeviantArt Inc.), alleging that the companies used the artists’ works without consent or compensation to build the training sets that inform their AI algorithms.

Another 2023 lawsuit, again involving Stability AI, saw the company accused of “brazen infringement of Getty Images’ intellectual property on a staggering scale” in order to build its AI dataset. Getty claimed Stability had copied more than 12 million photographs from its collection, along with the associated captions and metadata, without permission or compensation.

Creators in the U.K. with similar concerns may be pleased to hear that the country’s Culture Secretary is exploring issues around generative AI, copyright, and remuneration, but it’s worth keeping in mind that the pace of legislative progress often lags behind that of technological progress, as evidenced by the current back-and-forth in Parliament over AI copyright and competition oversight.

Parliament wrangles over AI measures

Earlier this month, the U.K. House of Lords—the upper chamber of parliament—called for more government action on AI oversight.

The rallying cry stemmed from a February House of Lords report on generative AI and LLMs, which warned that the government was narrowing its focus to high-stakes AI safety at the expense of competition and copyright issues. The Secretary of State for Science, Innovation and Technology, Michelle Donelan, responded by clarifying that the government’s AI oversight was, and would remain, in line with most of the parliamentary recommendations.

However, on May 2, Baroness Stowell of Beeston, chair of the Communications and Digital Select Committee in the House of Lords, came back at the government, asking it to “go beyond its current position” on how it handles copyright infringement and market competition in AI.

It’s likely that such disputes, which walk the tightrope between supporting innovative technology and protecting the rights of consumers and creators, will continue—with increased urgency—as AI models continue to advance.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, keeping data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
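As one way to picture the record-keeping this paragraph gestures at, the sketch below hash-chains dataset provenance records so that any later tampering is detectable. The field names, records, and chaining scheme are all illustrative assumptions, not the design of any particular enterprise blockchain.

```python
# Minimal sketch: hash-chaining dataset provenance records so tampering
# with any earlier entry invalidates every later hash. Illustrative only.

import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    # Hash the record together with the previous entry's hash, linking the chain.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def build_chain(records: list[dict]) -> list[dict]:
    """Attach a chained hash to each provenance record.

    Changing any earlier record changes every subsequent hash, so an
    auditor can detect tampering by recomputing the chain from the start.
    """
    chain, prev = [], "0" * 64  # arbitrary genesis value
    for rec in records:
        entry = dict(rec, prev_hash=prev)
        entry["hash"] = record_hash(rec, prev)
        prev = entry["hash"]
        chain.append(entry)
    return chain

if __name__ == "__main__":
    # Hypothetical provenance records for two training inputs.
    provenance = [
        {"work": "novel.txt", "rights_holder": "A. Author", "license": "opt-in"},
        {"work": "photo_123.jpg", "rights_holder": "Getty Images", "license": "none"},
    ]
    for entry in build_chain(provenance):
        print(entry["work"], entry["hash"][:12])
```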

Watch: Improving logistics, finance with AI & blockchain
