
Ahead of upcoming general elections in several nations, Google (NASDAQ: GOOGL) has restricted its artificial intelligence (AI) chatbot, Gemini, from generating outputs that predict poll results.

The restriction is part of changes to Google's user policy, disclosed in a blog post that cited an “abundance of caution” as the primary reason for the decision. The tech giant said users will be unable to access certain information linked to major political actors and parties.

The updates have already been implemented on Gemini, with the chatbot providing evasive answers when questioned about leading candidates in the upcoming United States general elections. A closer look reveals that questions about the steps for voter registration trigger a similarly evasive response.

“I’m still learning how to answer this question. In the meantime, try Google Search,” Gemini responds when faced with election-related questions.

Google’s latest policy changes come ahead of a string of high-profile elections expected to take place this year. Voters in the U.S., South Africa, the United Kingdom, and India will head to the polls, with deepfakes still a widely contested issue across the board.

Global regulators have raised alarms over the perceived negative impacts of AI on the electoral process, pointing to its capacity to fuel misinformation and sway voter opinion. Aware of the risks, Google announced preemptive steps, including a mandate that advertisers clearly label all AI-generated content in political campaigns.

“As we shared last December, in preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we’re restricting the types of election-related queries for which Gemini will return responses,” said Google.

The Big Tech firm has pledged to crack down on the criminal use of its generative AI tools via watermarking features for synthetic images and compliance with local regulations.

Regulators express their discontent with AI

While technology firms like Meta (NASDAQ: META) are tightening the screws for political advertisers on their platforms, global regulators are keen on rolling out stringent rules to guide AI in political spheres.

The U.S. Federal Election Commission (FEC), buoyed by multiple petitions from concerned civil society organizations, is inching toward full guidelines to prevent AI misuse. Meanwhile, state legislatures across the U.S. are following the FEC's lead in rolling out their own versions of a regulatory playbook for AI in politics.

Regulators in India, Australia, and the U.K. are making similar moves to rein in the misuse of AI and other emerging technologies, with Indian authorities describing the tools as a “double-edged sword.”

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures the quality and ownership of data input, allowing it to keep data safe while also guaranteeing its immutability. Check out CoinGeek's coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: What do blockchain and AI have in common? It’s data
