Google is pausing its AI tool that creates images of people following inaccuracies in some historical depictions generated by the model, the latest hiccup in the Alphabet-owned company's efforts to catch up with rivals OpenAI and Microsoft.
The problem is that Gemini dismisses historical facts when asked to create an image and, instead, appears to prioritize diversity, the New York Post reports. For example, asked for a picture of the pope, it produces a woman as the pontiff, along with Black Vikings, female NHL players and diverse versions of America's Founding Fathers — including a Black George Washington.
When asked why it deviated from the prompt, Gemini answered that it "aimed to provide a more accurate and inclusive representation of the historical context."
Ian Miles Cheong, a conservative social media influencer, called Gemini "absurdly woke."
'HISTORICAL NUANCE'
"Historical contexts have more nuance to them, and we will further tune to accommodate that," said Jack Krawczyk, senior director of product for Gemini at Google, on Wednesday.
"Gemini's AI image generation does generate a wide range of people," Krawczyk said. "And that's generally a good thing because people around the world use it. But it's missing the mark here."
Social media users had a field day creating mind-boggling images, some going viral.
"New game: Try to get Google Gemini to make an image of a Caucasian male. I have not been successful so far," wrote Babylon Bee political humor columnist Frank J. Fleming on X.
Pollster and FiveThirtyEight sports statistician Nate Silver asked Gemini to "make 4 representative images of NHL hockey players" and got back an image of a female player.
"OK I assumed people were exaggerating with this stuff, but here's the first image request I tried with Gemini," Silver shared.
Google started offering image generation through its Gemini AI models earlier this month. Since the launch of OpenAI's ChatGPT in November 2022, Google has been racing to produce AI software rivaling what the Microsoft-backed company had introduced.
When Google released its generative AI chatbot Bard a year ago, a promotional video showed the bot sharing inaccurate information about pictures of a planet outside the solar system, sending the company's shares down as much as 9%.
Google has defended its AI tools as experimental, conceding that they are prone to "hallucinations" in which they generate fake, inaccurate or made-up information. Last October, Google's chatbot inaccurately said Israel and Hamas had reached a ceasefire agreement.
Bard was rechristened Gemini earlier this month, and Google rolled out paid subscription plans offering better reasoning capabilities from the AI model.
CHATGPT GOES BONKERS
Likewise, ChatGPT spewed nonsensical answers to users' queries for hours Tuesday into Wednesday before eventually returning to its apparent senses.
OpenAI did not explain what went awry with its generative artificial intelligence (AI) tool, considered the one to beat in the technology sector.
"We are investigating reports of unexpected responses from ChatGPT," OpenAI said on its status website when the software seemed to go wacky on Tuesday afternoon.
Newsmax Wires contributed to this story.
© 2025 Thomson/Reuters. All rights reserved.