ChatGPT talked nonsense for several hours

2024-02-22 05:11:00

Did ChatGPT lose its mind? The wildly popular generative artificial intelligence (AI) interface that brought the technology to the mainstream malfunctioned for several hours Tuesday, answering users' questions with nonsensical sentences — a reminder that these systems are still in their infancy.

OpenAI, the start-up that launched the program at the end of 2022, indicated Wednesday morning on its site that ChatGPT was working “normally” again.

Erratic or incomprehensible responses

On Tuesday afternoon — San Francisco time, where the company is based — the Silicon Valley firm announced that it was "investigating reports of unexpected responses from ChatGPT". A few minutes later, it said it had "identified the problem" and was "in the process of resolving it".

Many users posted screenshots showing erratic or incomprehensible responses from the generative AI model. This cutting-edge technology can produce all kinds of content (text, audio, video), usually of astonishing quality, from a simple request in everyday language.

“My GPT is haunted”

On the forum for developers who use OpenAI tools, a user called "IYAnepo" noted ChatGPT's "strange" behavior. "It generates completely nonexistent words, omits words, and produces sequences of small keywords that are unintelligible to me, among other anomalies," he said. "One might think I had given it such instructions, but that is not the case. I feel like my GPT is haunted (…)."

Another user, "scott.eskridge", complained on the same forum that all his conversations with the language model had been "rapidly turning into nonsense for the last three hours". He copied an excerpt from one of the interface's responses: "Money for the bit and the list is one of the strangers and the internet where the currency and the person of the cost is one of the friends and the currency. Next time you look at the system, the exchange and the fact, remember to give."

OpenAI did not provide further details on the nature of the incident, which serves as a reminder that AI, even generative AI, has no awareness or understanding of what it "says". Gary Marcus, an AI specialist, hopes the incident will be seen as a "wake-up call". "These systems have never been stable. No one has ever been able to build safety guarantees around these systems," he wrote in his newsletter Tuesday. In his view, "the need for completely different technologies — less opaque, more interpretable, easier to maintain and debug, and therefore easier to implement — remains paramount".

