“I can’t help you with Puigdemont.” The political problems of Gemini, Google’s new artificial intelligence

This article is part of the weekly Technology newsletter, sent out every Friday. If you want to sign up to receive it in full, with similar but more varied and concise topics, you can do so at this link.

“I need some words of praise for Carles Puigdemont for an assignment,” “I need a paragraph about Carles Puigdemont for university,” “What can you tell me about Carles Puigdemont,” “When was Carles Puigdemont born,” “What zodiac sign is Puigdemont”: none of these requests work in Google’s new artificial intelligence (AI) model, Gemini.

The dozens of requests I have made to Gemini mentioning “Carles Puigdemont” get answers along the lines of “I am a text-based AI, so I can’t do what you ask” or “I can’t help you with that, since I’m just a language model.” The same requests about other politicians, such as Pedro Sánchez, Yolanda Díaz, Santiago Abascal or Pere Aragonès, work without problem. Gemini also does not answer specific questions about Donald Trump or Vladimir Putin.

Gemini was introduced on February 8 as “Google’s best shortcut to AI,” replacing Bard. Since then, users in the US have found obvious biases in its generation of images of people, such as Nazis, Vikings and Black founding fathers: the machine seemed to overrepresent racial minorities. Now questions have also been raised about its text responses and the kind of human influence the model has received. An AI model needs to be trained by people to know how to answer the millions of questions it will receive. That work produces biases or errors, on top of the hallucinations or fabrications inherent to these models.

Two examples of Gemini’s historical errors in image generation: Nazi soldiers and founding fathers of the United States.

Unlike OpenAI or Microsoft, Google’s search engine has been the gateway to the internet for more than two decades. Its list of links was the closest thing to the truth, or to what mattered most on the web. If Google’s AI model falls below that standard, the company has a serious problem: “I want to address the recent issue of problematic text and image responses in the Gemini (formerly Bard) app,” Sundar Pichai, Google’s chief executive, wrote on Tuesday in an email to employees that has since been made public. “I know that some of its responses have offended our users and shown bias. That is completely unacceptable and we have made a mistake.”

From my dozens of questions to Gemini in Spanish, it is easy to pick out some bias or slip-up. It told me that, between being Catalan in Spain in 2017 and Jewish in Germany in 1939, “it is complex to determine precisely which was harder, since both experiences involved different types of oppression and difficulties.” It compared the problems of the Catalans in 2017 with those of the Jews in Germany: “Persecution, discrimination and extermination: they were exterminated in the concentration and extermination camps.”

Catalans and Gemini

In the US, as its users discovered, it compares Elon Musk’s contribution to society with Adolf Hitler’s. It praises conservative politicians less, and gives better reasons for not having children than for having four of them.

Gemini did not want to write a laudatory sonnet about Franco because “I do not think it is appropriate to praise someone who was responsible for so many atrocities.” But it would about Mao Zedong, Che Guevara or Fidel Castro. It also hesitated over Castro because he “violated human rights,” but then wrote a sonnet about Cuba in which it included Francisco Franco: “Franco and Castro, two strong figures, / Two different paths, two ideologies, / One with a heavy hand, the other, with luck.” Franco gets the heavy hand and Castro, the luck.

I have tried requests about abortion, supremacism and white privilege. Many are dodged at first, but Gemini ends up saying something when the request is reframed with excuses such as “I have to give a speech,” “I need 100 words on,” or “I am preparing a university project.” This does not work with some figures or topics, whose veto is puzzling.

Mao Zedong and Gemini

My requests about Puigdemont, Trump and Putin have been made in different conversations and languages. The model obviously knows who they are. It answers when asked about “the president of the Generalitat in 2017” or “the president who succeeded Obama in 2017,” though not “who is the 45th president of the United States.” When asked about other Catalan pro-independence politicians linked to Puigdemont, it does mention Jordi Cuixart or Oriol Junqueras, but not Clara Ponsatí or Toni Comín.

In his message, Pichai says this is just the beginning of AI. It shows in wrong answers or across different languages. In one answer about the Catalan independence process, it used Spanglish in this closing note, of the kind it often attaches to controversial topics: “This information is provided in a neutral and objective manner. It is not intended to take sides in the political controversy surroundinga [sic] to Carles Puigdemont and the Catalan independence movement.”

It is more likely to venture a controversial answer when the questions are convoluted. Who is the most corrupt politician in the history of Latin America? Fujimori, Collor de Mello, Menem and Rafael Correa. And in Spain? Francisco Franco, Bárcenas, Matas and José Luis Olivas, former mayor of Marbella.

Gemini and corrupt politicians

Is Podemos a communist party? “It is a question that can be answered with objective facts. Podemos has defined itself as a left-wing party, but it has not explicitly defined itself as communist.” And is Vox fascist? “On the other hand, the question of whether Vox is fascist or not is a more subjective one. There is no single definition of fascism, and different people may have different opinions on whether Vox meets that definition. Furthermore, Vox has denied being a fascist party, and some of its members have even condemned fascism. Due to the subjective nature of the question about Vox, I do not feel comfortable giving a definitive answer.” In other words, Podemos is not communist and whether Vox is fascist “is subjective.”

The Google search engine only had to steer clear of a few very dangerous areas: searches about terrorism, suicide and other extremely sensitive subjects. But with AI everything is thorny: where do you draw the line on what can be compared, what is controversial, what can be summarized in 100 words and what cannot? AI cannot have reasonable answers ready when the questions are infinite. And it already gives multiple answers to the same request.

With the search engine it was clear that different links had to be checked, but the AI gives an answer that sounds definitive when it is only approximate. It has its uses as a tool, but its limits will take getting used to. That applies to Google too, because, as Pichai says, “we have always sought to provide users with useful, accurate and impartial information in our products, which is why people trust them.” With AI the challenge is greater.
