Google is swiftly addressing concerns surrounding its new AI-powered image creation tool after facing accusations of over-correcting to avoid potential racism.
Users had reported that Google’s Gemini bot was supplying images depicting a range of genders and ethnicities even when this was historically inaccurate. For instance, requests for images of America’s founding fathers yielded results showing women and people of colour.
Acknowledging the issue, Jack Krawczyk, senior director for Gemini Experiences at Google, stated, “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.” He assured that improvements were underway.
In response to the criticism, Google announced it would temporarily suspend the tool’s ability to generate images of people while working on resolving the issue.
This incident underscores broader challenges in AI regarding diversity and representation. Notably, Google faced backlash nearly a decade ago when its photos app erroneously labeled a photo of a black couple as “gorillas.”
Similarly, rival AI firm OpenAI has faced accusations of perpetuating harmful stereotypes, with its Dall-E image generator often producing results dominated by images of white men in response to queries such as “chief executive.”
With pressure mounting to demonstrate progress in AI development, Google recently launched the latest version of Gemini, which creates images in response to written prompts. Its debut, however, was met with criticism, with some accusing the company of prioritizing political correctness over accuracy.