Google halts Gemini AI image generation over racial accuracy concerns

The AI tool has been creating historically inaccurate images for the last couple of days.

Sejal Sharma

Google's Gemini AI has recently found itself in a bit of a quandary. The tech titan said today that it is pausing the image generation feature in Gemini AI, Reuters reported.

The snag comes after the tool was found conjuring up inaccurate depictions of historical figures, with some image requests yielding rather unexpected results.

The pause comes after an apology

Google had also issued an apology yesterday for ‘missing the mark.’

“We're working to improve these kinds of depictions immediately. Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here,” said the company in a post on X.

When Interesting Engineering asked the AI tool to generate an image, it wrote the following message:

"We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does."

It's a classic case of "Houston, we have a problem," as Google navigates the rocky terrain of AI image generation.

In a move aimed at challenging rivals like OpenAI and Microsoft, Google unveiled Gemini. But, sadly, what was intended as a leap forward in combating stereotypes in generative AI has instead sparked controversy.

Social media is full of ‘incorrect’ images that Gemini has generated. In one post, it was shown depicting US Founding Fathers and Nazi-era German soldiers as people of color.

Some individuals are also perceiving Gemini as biased against white people, alleging that its image generation favors multicultural and racially diverse depictions. 

This controversy has stirred debate online, with right-wing users criticizing Gemini's perceived lack of representation for white individuals. 

The issue gained traction after a former Google employee highlighted difficulties in obtaining images of white people from the AI, the Daily Dot reported.

In a post on X, a user claimed that Google has an anti-white bias. When he asked Gemini AI to create an image of a black family, the tool complied. But when he asked for an image of a white family, Gemini AI said it was unable to generate images that specify ethnicity or race.

The AI needs to make up its mind.

Then, of course, Elon Musk also chimed in to take a dig at Google, categorizing OpenAI and Gemini as racist and woke, in contrast to X, which he described as a ‘maximum truth-seeking’ platform.

Musk has in the past accused OpenAI’s ChatGPT of being ‘too woke,’ and has also described wokeness as a threat to ‘modern civilization.’

In another post, he wrote, “Perhaps it is now clear why @xAI’s Grok is so important. It is far from perfect right now, but will improve rapidly. V1.5 releases in 2 weeks. Rigorous pursuit of the truth, without regard to criticism, has never been more essential.”

Despite Google's attempts to address concerns, the debate underscores ongoing tensions surrounding diversity and representation in AI technologies.

Google’s quest to be crowned the AI king

The saga of Gemini's missteps, however, is not unfamiliar territory for Google. Take, for instance, the rebranding of Bard to Gemini: in one mishap, a promotional video shared inaccurate information about exoplanet images.