
Ahead of upcoming general elections in several nations, Google (NASDAQ: GOOGL) has restricted its artificial intelligence (AI) chatbot, Gemini, from generating responses that predict poll results.

The restriction is part of changes to its user policy, disclosed by Google in a blog post citing an “abundance of caution” as the primary reason for the decision. The tech giant said users will be unable to access certain information linked to major political actors and parties.

The updates have already been rolled out to Gemini, with the chatbot providing evasive answers when questioned about leading players in the upcoming United States general elections. A closer look reveals that questions about the steps for voter registration trigger a similarly evasive response.

“I’m still learning how to answer this question. In the meantime, try Google Search,” Gemini remarked when faced with political-related questions.

Google’s latest policy changes come ahead of a string of high-profile global elections expected to take place this year. Residents of the U.S., South Africa, the United Kingdom, and India will head to the polls, with deepfakes still a widely contested issue across the board.

Global regulators have raised alarm over the perceived negative impacts of AI on the electoral process, pointing to its capacity to fuel misinformation and sway voter opinion. Aware of the risks, Google announced preemptive steps, including requiring advertisers to clearly label all AI-generated content in political campaigns.

“As we shared last December, in preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we’re restricting the types of election-related queries for which Gemini will return responses,” said Google.

The Big Tech firm has pledged to crack down on the criminal use of its generative AI tools via watermarking features for synthetic images and compliance with local regulations.

Regulators express their discontent with AI

While technology firms like Meta (NASDAQ: META) are tightening the screws for political advertisers on their platforms, global regulators are keen on rolling out stringent rules to guide AI in political spheres.

The U.S. Federal Election Commission (FEC), buoyed by multiple petitions from concerned civil society organizations, is inching toward full guidelines to prevent AI misuse. Meanwhile, state legislatures across the U.S. are following the FEC’s lead in rolling out their own versions of a regulatory playbook for AI in politics.

Regulators in India, Australia, and the U.K. are making similar moves to rein in the misuse of AI and other emerging technologies, with Indian authorities describing the tools as a “double-edged sword.”

In order for artificial intelligence (AI) to work rightly within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: What does blockchain and AI have in common? It’s data
