Google (NASDAQ: GOOGL) has announced an update to its search ranking guidelines to accommodate content generated by artificial intelligence (AI) tools.

The search engine giant noted that it will scour the web for “content created for people” to rank on its platform. Before the latest update, Google’s guidance focused on content “written by people,” and analysts have described the one-word shift as a nod to the rise of AI.

In the future, Google will give equal preference to both human and AI-generated content, focusing on whether or not the content is helpful to the website’s visitors. Per the update, human-generated content that fails to offer users a satisfying experience will rank below content generated using AI if it is deemed “helpful.”

Google says it will use its “helpful content system” to make the distinction, relying on several signals to reach a decision. The technology giant disclosed that the system is AI-based and operates without human input, clarifying that the classifier is neither a manual penalty nor a spam action.

“This classifier process is entirely automated, using a machine-learning model,” read the update. “It works globally across all languages. It is not a manual action nor a spam action. Instead, it’s just one of many signals Google evaluates to rank content.”

Google added that it will regularly refine how the classifier detects unhelpful content.

There is widespread speculation that Google’s removal of the distinction could spur content creators to lean heavily on AI to generate content. However, experts have warned that wholesale reliance on AI carries new risks, including AI hallucination leading to factual errors.

“If you want search engines to send folks your way, you need to provide something that’s not the same as on other sites,” said Google Search Relations team lead John Mueller on Reddit. “By definition [I’m simplifying], if you’re using AI to write your content, it’s going to be rehashed from other sites.”

Previous attempts to distinguish AI-generated text from human writing have faced several obstacles, with OpenAI retiring its AI classifier over inaccuracies.

Google’s AI march

Not content with the launch of its generative AI platform Bard, Google is reportedly training a new AI model rumored to rival OpenAI’s ChatGPT. The company has since adopted an “ecosystem approach” with AI, integrating its AI models across several units, with Google CEO Sundar Pichai hinting at increased AI investments.

Google says it is not innovating in AI unchecked but is committed to responsible safeguards for AI and other emerging technologies. The company has pledged $20 million to support efforts promoting safe AI usage and has revealed internal policies for labeling AI-generated content.

For AI to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—keeping data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
