Video streaming platform YouTube has hinted at incoming updates to its policy governing creators’ use of artificial intelligence (AI) on its platform.
In a blog post, YouTube says creators should clearly label videos containing AI-generated material in the description before uploading. The company warned that failure to adhere to the incoming disclosure rules could result in videos being taken down, part of its push to crack down on misinformation.
YouTube says the disclosures are specifically warranted if the video revolves around “sensitive topics,” particularly public health crises, elections, public officials, and global conflicts. While the Google (NASDAQ: GOOGL)-owned streaming giant did not reveal a date for the new rules, content creators are bracing themselves for a launch before the end of 2023.
“Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties,” according to YouTube. “We’ll work with creators before this rolls out to make sure they understand these new requirements.”
To ensure compliance with the incoming rules, YouTube disclosed that it will roll out new options for creators to indicate the presence of AI-generated content. Viewers will be informed of AI-generated content via a label in the description panel, plus a “prominent label” if the video pertains to a sensitive topic.
In cases where clear-cut labeling is insufficient to reduce risks of misinformation, YouTube says it will take down the video from its platform in line with its Community Guidelines. Users can also request the removal of videos containing AI-generated content if the content represents an “identifiable individual.”
“Not all content will be removed from YouTube, and we’ll consider a variety of factors when evaluating these requests,” read the blog post. “This could include whether the content is parody or satire, whether the person making the request can be uniquely identified, or whether it features a public official or well-known individual, in which case there may be a higher bar.”
In the incoming updates, YouTube says music partners will be given the option to request the removal of synthetic content mimicking an original artist’s voice. The option will be open to labels and distributors participating in YouTube’s AI music experiments before a mainstream rollout to other labels.
YouTube’s embrace of emerging technology
YouTube has been tinkering with AI to improve users’ viewing experience, teasing new functionalities at the start of November. The video streaming giant is experimenting with a chatbot to offer viewers detailed insights into a video and a content summarization feature for creators.
After beta launching YouTube Create, Dream Screen, and Aloud, the company has confirmed plans to continue its embrace of emerging technologies by using AI for content moderation for “improved speed and accuracy.” The company says it will use “adversarial testing and threat detection” to prevent bad actors from circumventing the AI rules.
For artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: Does AI know what it’s doing?