OpenAI Forms Superalignment Team to Manage Risks of Superintelligent AI

2023-07-07 15:55:02

JVTech News, AI: The creators of ChatGPT play both arsonist and firefighter, believing that "the vast power of a superintelligence could be very dangerous"



OpenAI, the company behind ChatGPT, has just announced the formation of a team dedicated to managing the risks posed by the rise of superintelligent AI, an announcement that edges reality ever closer to science fiction.

Have you ever seen the Terminator movies? In them, an artificial intelligence, Skynet, becomes self-aware and turns against its creator: humankind. It hijacks global networks and unleashes a nuclear apocalypse on Earth, before building an army of robots to wipe out what remains of the human resistance.

For decades, this disaster scenario imagined by James Cameron was pure science fiction. Today, however, it seems to preoccupy the very creators of one of the most popular AIs of the moment: OpenAI, the company behind ChatGPT.

Superalignment, the team that wants to master AI

This week, OpenAI announced the creation of Superalignment, a kind of task force responsible for "steering and controlling AI systems much smarter than us". Such systems do not exist yet, but according to the company's predictions it is only a matter of years before they do: it could happen before the end of the current decade.

"Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world's most important problems," the organization says. "But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction." Skynet, anyone?

The team will be co-led by OpenAI Chief Scientist Ilya Sutskever and Head of Alignment Jan Leike. Recruitment is currently underway to find researchers able to tackle this thorny problem in the years to come.

A worrying finding in a rapidly changing context

"Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue," explain the two leaders of the initiative. It is a worrying admission at a time when many tech companies have thrown themselves into a genuine race, anxious not to be left behind in a market with highly lucrative potential.

According to Ilya Sutskever and Jan Leike, it is currently human oversight teams that supervise the progress of AIs like ChatGPT. "But humans won't be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs."

The objective of this crack team will be to build a "roughly human-level automated alignment researcher" that can then be scaled up using "vast amounts of compute" to "iteratively align superintelligence". To reach this goal, OpenAI will dedicate 20% of the computing power it has secured to date to the initiative.

"Essential" regulation, according to Sam Altman

OpenAI boss Sam Altman has met with more than 100 U.S. federal lawmakers in recent months to discuss the regulation of AI, which he himself admits is "essential". That has not prevented ChatGPT from being regularly singled out for spreading misleading information, for alleged copyright violations, and for its ability to answer, in roundabout ways, certain dangerous user requests.

ChatGPT is not the only artificial intelligence capable of causing problems. Midjourney's creations are increasingly realistic, and they are regularly used for disinformation. We are still far from superintelligence, yet AIs already raise many ethical questions. One can indeed wonder where all this will lead us in the years to come.

