Google (NASDAQ: GOOGL) has introduced new policy updates to intensify its fight against artificial intelligence (AI)-generated content portraying individuals in explicit contexts without their permission.
In a statement, the tech giant disclosed that it will demote results of explicit deepfakes in Google Search to protect victims from bad actors amid a spike in offensive incidents. Google says the latest tools against deepfakes build on its existing policies, with the most notable change being a simpler process for filing complaints.
While victims have always enjoyed the right to request takedowns of non-consensual fake content from Google Search, the latest improvements allow for easy reporting of offensive websites. Google’s statement disclosed that the company will remove duplicates of the derogatory content on the web, building on its experiments with other illicit content.
“These efforts are designed to give people added peace of mind, especially if they’re concerned about similar content about them popping up in the future,” read the statement.
The second weapon in Google’s arsenal against deepfakes is an improvement in the Search ranking system. Google believes its decision to build systems to rank quality information at the top of Search may be “the best protection against harmful content.”
Going forward, the search giant plans to push AI-generated NSFW (not safe for work) content lower in its rankings to stifle its distribution. For searches involving specific names, Google says it will promote high-quality, non-explicit content to reduce exposure to AI-generated deepfakes.
There are also plans to outright demote websites that have accumulated a slew of deepfake reports against them, cutting off circulation at the source.
The combination of these features is poised to reduce incidents by up to 70%, but the company notes that the fight is far from finished. For now, Google continues to grapple with distinguishing consensual deepfakes from those made without an individual's approval, a distinction search engines are unable to make on their own.
“These changes are major updates to our protections on Search, but there’s more work to do to address this issue, and we’ll keep developing new solutions to help people affected by this content,” said Google.
In search of a lasting solution
Google executives confirm that the new initiatives are not a silver bullet for the issue of AI-generated explicit content, arguing that industry-wide partnerships may be the way forward. Several tech giants are exploring common strategies to deal with the scourge as they walk the fine line between censoring free speech and protecting users.
Regulators are also stepping into the fray to protect victims following the rollout of new playbooks for AI developers and users. On the other hand, experts argue that integrating AI with blockchain technology may allow search engines to confirm the authenticity of AI-generated content to protect victims.
In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.