
Google (NASDAQ: GOOGL) has suspended its image generation feature on Gemini, the company’s flagship chatbot, after its attempt to promote diversity led to “embarrassing” and “offensive” images.

Google launched the new feature three weeks ago to compete with market leaders like Midjourney, Stable Diffusion, and OpenAI’s DALL-E 3. However, last week, the feature grabbed headlines over some controversial output, such as including Native Americans and Asians when prompted to generate “a portrait of the Founding Fathers of America.”

In response, Google took down the feature, with Senior Vice President Prabhakar Raghavan acknowledging that “it’s clear that this feature missed the mark.”

According to Raghavan, Google wanted Gemini to work for everyone around the world. As such, it included diversity in its development so that when you ask for an image of a person walking a dog, for instance, it doesn’t only produce images of white people.

However, two things went wrong with Gemini. First, Google failed to account for cases where images clearly shouldn’t show a range of people, such as a request for a white teacher or a depiction of a historically accurate event.

Second, Google was so concerned about falling into the traps other AI models have faced, such as producing racially insensitive or sexually violent output, that it made Gemini too cautious. Several users revealed that Gemini would refuse to produce images that were available on other AI models.

The challenge could be that Google has been adding diversity terms “under the hood” to make Gemini more inclusive, speculates Margaret Mitchell, the chief ethics scientist at Hugging Face, an AI startup valued at $4.5 billion last August.

Mitchell, who previously served as the head of Ethical AI at Google, says that when a user enters a prompt like “portrait of a chief,” Google’s LLM may add the term “indigenous” to produce more diverse results.

Mitchell also speculated that it could be an issue with prioritization in which Google pushes the more diverse results higher up.
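The two mechanisms Mitchell speculates about, silently appending diversity terms to a prompt and re-ranking candidate outputs so more diverse results surface first, can be sketched in a few lines. This is a purely illustrative sketch: the function names, the trigger-to-term mapping, and the scoring field are assumptions for demonstration, not Google’s actual pipeline.

```python
# Illustrative sketch of Mitchell's two hypotheses. Nothing here reflects
# Google's real implementation; all names and data are hypothetical.

# Assumed trigger-word mapping, per Mitchell's "chief" -> "indigenous" example.
DIVERSITY_TERMS = {"chief": "indigenous"}


def augment_prompt(prompt: str) -> str:
    """Hypothesis 1: append a diversity term 'under the hood' when a
    trigger word appears in the user's prompt."""
    lowered = prompt.lower()
    for trigger, term in DIVERSITY_TERMS.items():
        if trigger in lowered and term not in lowered:
            return f"{prompt}, {term}"
    return prompt


def rerank(candidates: list[dict]) -> list[dict]:
    """Hypothesis 2: sort candidate images so those with a higher
    (hypothetical) diversity score are pushed up the results."""
    return sorted(candidates, key=lambda c: c["diversity_score"], reverse=True)
```

Both approaches operate after or around the model rather than on its training data, which is exactly the kind of post-hoc patching Mitchell argues against below.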

“Rather than focusing on these post-hoc solutions, we should be focusing on the data. We don’t have to have racist systems if we curate data well from the start,” she told the Washington Post.

Google pledged to continue fine-tuning the image generator.

“I can’t promise that Gemini won’t occasionally generate embarrassing, inaccurate, or offensive results — but I can promise that we will continue to take action whenever we identify an issue,” Raghavan said.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
