
YouTube creators now required to reveal AI-generated content

YouTube has announced new rules requiring creators to label content made using artificial intelligence (AI) on the video streaming platform as part of measures designed to protect users.

In a company blog post, YouTube said that creators are expected to disclose to viewers when realistic content is made with AI. Per the statement, YouTube’s new disclosure policy will be required in cases where AI is used to alter the likeness of a person or change footage of realistic events or places.

Creators who lean on AI to generate lifelike scenes from scratch will be required to label their videos, but the new rules include certain exceptions. Creators will be exempt from labeling requirements if AI tools are used in animations or other “fantastical” content. Likewise, using AI for simple edits like color adjustment, beauty filters, or other special effects will not require a disclosure tag.

“We won’t require creators to disclose if generative AI was used for productivity, like generating scripts, content ideas, or automatic captions,” the blog post read. “We also won’t require creators to disclose when synthetic media is unrealistic and/or the changes are inconsequential.”

The new AI label, which is still in beta, is expected to appear in the video description box, though the company says videos on sensitive topics will carry a more prominent label. Where a creator fails to add the label, YouTube says it will apply one to the AI-generated content itself to avoid misleading users.

YouTube plans to introduce penalties for creators who consistently fail to comply with the new labeling policy. The penalties may include video takedowns and account demonetization, but the company says it will give its community time to “adjust to the new process and features.”

The labeling feature will debut on mobile applications in the coming weeks before expanding to desktop and TV. The labeling process itself is straightforward: creators simply select “Yes” to the altered-content prompt in the details section before uploading their videos.

“This will be an ever-evolving process, and we at YouTube will continue to improve as we learn,” YouTube said. “We hope that this increased transparency will help all of us better appreciate the ways AI continues to empower human creativity.”

Label or face regulatory wrath

As regulatory authorities scramble to rein in AI innovation, a key part of their strategy is requiring developers to label all synthetic content. Several AI firms, including Google (NASDAQ: GOOGL) and Meta (NASDAQ: META), have moved to comply with the labeling requirements, testing the viability of an invisible watermarking feature.

Other companies are testing the waters by integrating blockchain technology with AI as a way to clearly label synthetic content.
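To illustrate the general idea, here is a minimal sketch of how a platform might fingerprint an AI-generated video and prepare a disclosure record suitable for anchoring on a blockchain. The file name, field names, and record format are hypothetical and are not drawn from YouTube’s, Google’s, or Meta’s actual systems.

import hashlib
import json
import time


def fingerprint_video(path: str) -> str:
    """Compute a SHA-256 digest of the video file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_disclosure_record(path: str, ai_generated: bool) -> str:
    """Assemble a minimal disclosure record that could be anchored on chain."""
    record = {
        "content_hash": fingerprint_video(path),
        "ai_generated": ai_generated,     # the creator's disclosure
        "labeled_at": int(time.time()),   # Unix timestamp of labeling
    }
    # In a real system, this JSON (or its hash) would be written into a
    # blockchain transaction so the disclosure becomes tamper-evident.
    return json.dumps(record, indent=2)


if __name__ == "__main__":
    # "upload.mp4" is a hypothetical file name, used for illustration only.
    print(build_disclosure_record("upload.mp4", ai_generated=True))

Anchoring only the hash keeps the video itself off chain while still letting anyone later verify that the labeled file has not been tampered with.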

“Developers should work to watermark video and images at the time of creation, and platforms should commit to attaching labels and disclosures at the time of distribution,” U.S. Senator Michael Bennet said in June 2023. “A combined approach is required to deal with this singular threat.”

For artificial intelligence (AI) to work rightly within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Does AI know what it’s doing?


New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.