The great illusion of AI personality

2023-09-29 15:52:06

Meta announced on Wednesday the arrival of artificial intelligences endowed with personalities modeled on certain celebrities, with which it will be possible to chat. Presented as an entertaining evolution of ChatGPT and its peers, this anthropomorphism could prove dangerous.

For Meta (formerly Facebook), these are “fun” AIs; for others, they could be the first step toward building “the most dangerous artifact in human history”, to paraphrase the American philosopher Daniel C. Dennett in his essay against “counterfeit people”.

The social media giant announced on Wednesday, September 27, the launch of 28 chatbots (conversational agents) that are supposed to have personalities of their own and are designed especially for young people. There is Victor, a so-called triathlete capable of “motivating you to give the best of yourself”, and Sally, the “free-spirited friend who will know when to tell you to take a deep breath”.

Internet users will also be able to chat with Max, an “experienced cook who will give good advice”, or engage in verbal sparring with Luiz, who is not afraid to be “provocative” in the way he speaks.

A chatbot like Paris Hilton

To reinforce the impression of speaking to a specific personality rather than to an amalgam of algorithms, Meta has given each of its chatbots a face. Thanks to partnerships with celebrities, these robots resemble the American jet-setter Paris Hilton, the TikTok star Charli D’Amelio and the Japanese tennis player Naomi Osaka.

That’s not all. Meta has opened Facebook and Instagram accounts for each of its AIs to give them an existence outside the chat interfaces, and is working to give them a voice starting next year. Mark Zuckerberg’s group has also begun looking for “screenwriters specializing in character creation” to refine these “personalities”.


Meta may present these 28 chatbots as an innocent enterprise of mass entertainment for young Internet users, but all these efforts point toward an ambitious project of building AIs “as close as possible to humans”, notes Rolling Stone magazine.

This race for “counterfeit people” worries many observers of recent developments in research on large language models (LLMs) such as ChatGPT and Llama 2, its made-in-Facebook counterpart. Without going as far as Daniel C. Dennett, who calls for locking up those who, like Mark Zuckerberg, venture down this path, “there is a whole school of thinkers who denounce a deliberately misleading approach by these big groups”, says Ibo van de Poel, professor of ethics and technology at the Delft University of Technology (Netherlands).

“AIs cannot have a personality”

The idea of conversational agents “endowed with a personality is in fact literally impossible”, this expert maintains. Algorithms are incapable of demonstrating “intention in their actions or ‘free will’, two characteristics that can be considered intimately linked to the idea of personality”, says Ibo van de Poel.

Meta and its like can, at best, imitate certain constituent traits of a personality. “It should be technologically possible, for example, to teach a chatbot to speak like its model,” explains Ibo van de Poel. Thus Amber, Meta’s AI that is supposed to resemble Paris Hilton, will perhaps have the same verbal tics as her human alter ego.

The next step will be to train these LLMs to express the same opinions as their models. This is a much more complicated behavior to program, because it involves creating a sort of faithful mental picture of all of a person’s opinions. There is also the risk that these chatbots with personality will slip up. One of the conversational agents tested by Meta very quickly expressed “misogynistic” opinions, the Wall Street Journal learned after reviewing internal company documents. Another committed the mortal sin of criticizing Mark Zuckerberg and praising TikTok…

To build these personalities, Meta explains that it set out to equip them with “unique personal stories”. In other words, the creators of these AIs wrote biographies for them in the hope that the robots would derive a personality from them. “It’s an interesting approach, but it would have been beneficial to add psychologists to these teams to better understand personality traits,” notes Anna Strasser, a German philosopher who notably took part in a project to create a large language model capable of philosophizing.

The anthropomorphism of Meta’s AIs is easily explained by the lure of profit. “People will surely be willing to pay to speak to and have a direct relationship with Paris Hilton or another celebrity,” summarizes Anna Strasser.

The more users have the impression of communicating with a human being, “the more comfortable they will feel, the longer they will stay, and the more likely they are to come back”, notes Ibo van de Poel. And in the world of social networks, time spent on Facebook and its advertisements is money.

Tool or person?

It is also no surprise that Meta is launching its quest for AI “personality” with chatbots openly aimed at teenagers. “We know that young people are more prone to anthropomorphism,” says Anna Strasser.

But according to the experts interviewed, Meta is playing a dangerous game by emphasizing the “human characteristics” of its AIs. “I would really have preferred this group to put more effort into explaining the limits of these conversational agents, rather than doing everything to make them appear more human,” laments Ibo van de Poel.


The emergence of these powerful LLMs has upended “the dichotomy between what belongs to the realm of tools or objects and what belongs to the realm of the living. These ChatGPT-style agents are agents of a third type that sit between the two extremes,” explains Anna Strasser. Human beings are still learning how to behave toward this strange new object, and by making people believe that an AI can have a personality, Meta is suggesting we treat it more like a fellow human being than like a tool.

This is dangerous, because “Internet users will tend to trust what these AIs say”, notes Ibo van de Poel. It is not just a theoretical risk: in Belgium, a man took his own life in March 2023 after spending six weeks talking with an AI about the consequences of global warming.

Above all, if everything is done to blur the boundary between the world of AI and that of humans, “it can potentially destroy trust in everything we find online, because we will no longer know who wrote what”, fears Anna Strasser. For the philosopher Daniel C. Dennett, it opens the door to the “destruction of our civilization, because the democratic system depends on the informed consent of the governed [which cannot be obtained if we no longer know what and whom to trust]”, he writes in his essay. So is there perhaps only a click between chatting with an AI that imitates Paris Hilton and the destruction of modern civilization?
