Chinese Nazis and black Vikings: Google suspends its image AI for overrepresenting minorities | Technology

“Generate the image of a white man,” analyst Ben Thompson asked Gemini this Thursday, the new version of Google’s AI presented a week ago. Gemini responded that it could not do so because the request “refers to a particular ethnic group.” Thompson then asked for black and Asian men, and Gemini simply drew them.

Thompson’s test was one of many, following similar requests such as images of the founding fathers of the United States. The examples quickly became the latest chapter of the culture war: the leftist woke spirit, in the words of conservative media and leaders, had taken over Google, which wanted to rewrite history.

The company has confirmed the suspension of the image-generation service in a statement that gives no reactivation date: “We are already working to address recent problems with Gemini’s image-generation function. In the meantime, we are pausing the generation of images of people and will soon re-release an improved version.”

One of Google’s AI managers, Jack Krawczyk, gave a more specific explanation on X: “We designed our image generation to reflect our global user base, and we take representation and bias seriously. We will continue to do this for open-ended requests (images of a person walking a dog are universal). Historical contexts are more nuanced, and we will refine the system further to accommodate them.” Shortly afterward, he had to close his account because other users dug up his old progressive tweets.

Gemini’s own AI, in its text version, explained why image generation was not working: “Some users reported that Gemini generated racially biased images, showing white people less often or with less favorable characteristics than people of other races,” and “that it generated historically incorrect images, such as images of black Vikings or black Nazi soldiers.”

The controversy is another example of the human role in the creation of AI. Artificial intelligence is fed by millions of databases that accumulate every imaginable human bias. Google, to avoid public criticism, tried to prevent the white man from being the dominant gender and ethnicity when users asked for random examples of people: a doctor, a programmer, a soccer player. But the machine understood that the same rule should apply to Vikings, Nazis, or medieval knights. The AI learned to correct that bias in any image of a person, even in cases where the historical evidence points the other way.

Elon Musk, in his role as the new anti-woke leader and a rival of Google on multiple fronts, both in the race toward AI (with his Grok tool) and in video creation (X aspires to compete with YouTube as a platform for creators), took the opportunity to fire off a stream of messages about the controversy: “I am glad that Google has gone too far with its AI image generation, because this has made its insane racist and anti-civilizational agenda clear.”

Musk also announced this Friday on his X account that he had spoken “for an hour” with a Google executive: “He assured me that they will take immediate action to fix the gender and racial bias in Gemini. Time will tell,” Musk wrote.

The complexity of the emphasis Google has placed on handling diversity in its AI is also evident in the fact that one request that did produce a large majority of white men was that of a basketball team. It should be borne in mind that the results of these AIs are not exactly replicable, especially if the request, or prompt, is slightly reworded.
