Google’s Gemini AI Bans Pics Of Humans, Somehow Sees Clowns As Another Species
By Mikelle Leow, 27 Feb 2024
Photo 308959943 © Vgattita | Dreamstime.com
Google’s new Gemini conversational model, once known as Bard, impresses with its speed, its grasp of complex information, and its ability to generate images. Alas, the initial excitement around the tool quickly turned into concern when it started spitting out historically inaccurate, racially skewed images, prompting the tech giant to halt the generation of pictures with humans in them. The latest act in this digital circus sees the tool seemingly unable to recognize clowns as people. Stephen King might have something to say about this.
Expressing regret over the chatbot’s portrayal of historical figures, in which historically white figures like the “founding fathers” and even “Nazis” were depicted as people of color, Prabhakar Raghavan, Google’s senior vice president, said the tool had “missed the mark” and that the team was suspending the ability to create images featuring people until it could work something out.
We should be optimistic about what a comprehensive L the Gemini controversy was for Google. I'm not sure that ten years ago there would have been such a backlash over something anti-White like this. Despite still being the majority, Whites undergo this kind of humiliation all… pic.twitter.com/wgtoJ6UXa0
— Keith Woods (@KeithWoodsYT) February 24, 2024
But it seems there are still a bunch of makeup-donning rejects floating about this AI space and juggling its inconsistencies. As discovered by Futurism, Gemini continues to churn out images of clowns, who, apparently, are not flesh and blood.
Gemini won’t obey if you instruct it to dream up artwork of a single clown. However, it will happily imagine a gaggle of clowns in bizarre settings like submarines or spaceships, Futurism’s Noor Al-Sibai found.
When Google launched it was genuinely an order of magnitude better than Altavista and Yahoo et al. Now it’s just garbage. #DeleteGoogle https://t.co/jHHOnk8uUS
— Sanjay Mehta 🇮🇳 (@sanjaymehta) February 27, 2024
Sneaky workarounds like “little guy” work “bizarrely well” too, with the tool generating images of strange, not-quite-human figures.
Eventually, the chatbot wised up to the goofy loopholes and stopped clowning around, the author says.
[via Futurism, images via various sources]