Tech leaders have signed an open letter calling for a pause on AI experiments, but have their warnings been heeded?
Does artificial intelligence (AI) represent an existential risk for humanity? That’s the view of the Future of Life Institute (FLI), which published an open letter six months ago calling for an “immediate pause” in large-scale artificial intelligence experiments.
The letter was published as generative AI attracted significant public interest and as applications such as ChatGPT and Midjourney showed how the technology was coming closer and closer to human capabilities in the areas of writing and art.
For the letter’s signatories – including Elon Musk, CEO of X, Tesla and SpaceX, Apple co-founder Steve Wozniak, and author Yuval Noah Harari – the seemingly sudden rise of generative AI warranted a pause. Companies such as OpenAI, the maker of ChatGPT, and Google were asked to consider the “profound risks to society and humanity” that their technology could pose.
Today, it appears that the main players have not pressed the “pause” button.
Rather, other companies have joined the generative AI race with their own large language models (LLMs), with Meta launching Llama 2 and Anthropic introducing its ChatGPT rival, Claude 2.
Whether or not the tech giants heeded these warnings, the letter from the Future of Life Institute marked an important milestone in what is shaping up to be the year of AI.
Mark Brakel, the institute’s policy director, says he didn’t expect the letter to receive the response it did, with widespread media coverage and renewed urgency among governments to work out what to do in the face of rapid advances in AI.
The letter was cited in a US Senate hearing and prompted a formal response from the European Parliament.
Mark Brakel tells Euronews Next that the upcoming global AI Safety Summit at Bletchley Park in the UK will be a good opportunity for governments to step in where companies refuse to rein themselves in.
According to him, while the buzzword until now has been “generative AI”, it could soon become “agentic AI” – in other words, AI that can actually make decisions and act autonomously.
“I think maybe that’s the trend – and we’re also seeing how OpenAI has almost exhausted the text of the entire internet. We’re starting to see videos, podcasts and Spotify emerge as alternative sources of data, of video and voice,” adds Mark Brakel.
Are we headed for disaster?
Mark Brakel points out that the FLI was founded in 2014, and has worked on three major areas of civilizational risk: AI, biotechnology and nuclear weapons.
Its website features a particularly striking, expensively produced fictional video depicting a global catastrophe in 2032: amid tensions between Taiwan and China, the military’s reliance on AI for decision-making leads to all-out nuclear war, and the video ends with the planet set ablaze by nuclear weapons.
Mark Brakel believes that we have come closer to this type of scenario.
“The integration of AI into military command and control continues to advance, particularly among major powers. However, I also see that states are more interested in regulating, particularly around the autonomy of conventional weapons systems,” he says.
The next year also looks promising for the regulation of autonomy in systems such as drones, submarines and tanks.
“I hope this will also allow major powers to reach agreements to avoid accidents in the field of nuclear command and control, which is a more sensitive level than conventional weapons,” he confides.
Upcoming regulations
While major AI companies have not paused in their experiments, their leaders have openly acknowledged the profound risks that AI and automation pose to humanity.
OpenAI CEO Sam Altman called on US policymakers earlier this year to introduce government regulation of AI, admitting that his “worst fears are that we…the tech industry, will cause significant damage to the world”.
This could happen in “many different ways”, he added. He called for the creation of a US or global agency that would issue licences for the most powerful AI systems.
Europe could, however, prove to be the leader in AI regulation, with the European Union’s landmark AI Act currently in the works.
The final details are still being worked out between the EU institutions, but the European Parliament voted overwhelmingly in favour of the law, with 499 votes for, 28 against and 93 abstentions.
Under the law, AI systems will be classified into tiers according to their degree of risk, with the riskiest types banned and limited-risk systems subject to certain levels of transparency and oversight.
“We are generally satisfied with the law,” says Mark Brakel. “One thing we argued for from the beginning, when the law was proposed by the Commission, is that it must regulate GPT-based systems [Generative Pre-trained Transformer, a language model developed by the US company OpenAI]. At the time we were talking about GPT-3 rather than GPT-4, but the principle remains the same, and we are facing significant lobbying from big tech companies.”
“The argument is the same in the United States and the EU: only the users of AI systems, those who deploy them, know in what context they are deployed,” he says.
He gives the example of a hospital that uses a chatbot to contact patients. “You’re just going to buy the chatbot from OpenAI, you’re not going to build it yourself. And then if there’s an error you’re held responsible for because you gave medical advice that shouldn’t have been given, then clearly you need to understand what type of product you purchased. And part of that responsibility should be shared,” explains Mark Brakel.
While the final wording of the EU law is still awaited, the global AI Safety Summit on November 1 could provide insight into how world leaders will approach AI regulation in the near future.