5 experts and scholars speak to Asharq Al-Awsat
Friday – 9 Ramadan 1444 A.H. – March 31, 2023 A.D.
Cairo: Hazem Badr
The world had barely grasped the superior capabilities offered by the chatbot GPT-3.5 when OpenAI, the company behind it, surprised its followers in the middle of this month with a more advanced version, GPT-4, which raised several concerns. Remarkably, the concern this time came from technology specialists themselves, prompting them to issue a petition calling for a “summer truce” during which the development of artificial intelligence systems would be halted for six months.
Tony Prescott, Professor of Cognitive Robotics at the University of Sheffield, UK.
According to the signatories, the proposed truce aims to “wait until the rules of (digital governance) are firmly established, rules that guarantee artificial intelligence is used in the right direction and for the benefit of humanity.”
The petition was signed by 1,377 prominent computer scientists and other major technology figures, including Elon Musk, owner of the electric car company Tesla, who has sought to enable technology to perform the functions of the human brain via a brain chip developed by his company Neuralink. Apple co-founder Steve Wozniak also signed.
The petition, prepared by the nonprofit Future of Life Institute, warns that AI systems with “human-competitive intelligence can pose profound risks to society and humanity,” from flooding the Internet with misinformation and automating away jobs to more catastrophic future risks outside the realm of science fiction.
Asharq Al-Awsat contacted five experts and scientists in the field of artificial intelligence, including four of the petition’s signatories as well as the inventor of Wi-Fi, to explore their reasons and their assessments of how serious these concerns are, and whether they are related to commercial competition.
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can reliably understand, predict or control,” the petition says.
It adds: “We call on all artificial intelligence laboratories to immediately pause, for at least six months, the training of artificial intelligence systems more powerful than (GPT-4). This pause must be public and verifiable and include all key actors, and if such a moratorium cannot be enacted quickly, governments should step in and impose one.”
– Artificial intelligence «hysteria»
The petition, in turn, sparked a sharp split between those who see it as expressing logical fears and those who see its fears as exaggerated, akin to “hysteria”, the term used by Gary Marcus, professor emeritus at New York University, in statements reported Wednesday by the website of US National Public Radio (NPR).
“While the petition raises the specter of artificial intelligence smarter than anything that already exists, the GPT-4 tool that sparked these fears is not (super) artificial intelligence,” says Marcus. “As impressive as it is, it is just a text-generation tool that predicts which words will answer a given request, based on what it has learned by absorbing huge bodies of written work.”
He continues: “I disagree with others who are concerned about the near-term possibility of intelligent machines that can improve themselves outside of humanity’s control. What worries me most is widely deployed ‘average artificial intelligence’ that could be used as a tool by criminals or terrorists to deceive people or to spread dangerous misinformation.”
Hatem Zaghloul, the Egyptian-Canadian communications scientist whose name is associated with the invention of Wi-Fi, agrees with Marcus’s sense that humanity’s fear of artificial intelligence is exaggerated.
Zaghloul asked pointedly in statements to Asharq Al-Awsat: “Have we closed off every other danger to humanity, including the nuclear bomb, that we are now chasing the threat of artificial intelligence to humans?”
Zaghloul did not hide his suspicion of what he described as “economic purposes stemming from competition between technological entities,” expecting that such purposes “lie behind some of the signatures on this petition. But there will undoubtedly be others whose motives in signing were noble, namely fear of the impact of artificial intelligence on the job opportunities available to humans, which is the main concern reflected in the petition.”
Zaghloul stressed that “artificial intelligence will make our lives easier and help us perform our tasks, and we must strive toward further development, not halt development for six months.”
He added sarcastically: “My advice to those who are worried is to ask (ChatGPT) for treatment. I do that constantly with every problem I face.”
Zaghloul’s advice represents an acceptable way of using artificial intelligence, according to Domenico Talia, a professor of computer engineering at the University of Calabria in Italy and one of the petition’s signatories, though he believes there are “reasonable concerns about misuse.”
He told Asharq Al-Awsat: “We can benefit from artificial intelligence solutions in many fields and social sectors, for example in health care, finance and scientific discovery, but their use can be risky, as they are non-transparent and poorly documented systems. In some cases they give wrong answers that may create problems for users.”
Talia believes that “the pause period demanded by the petition could be useful for discussing new policies for deploying artificial intelligence technologies, along with laws and regulations that protect citizens from misuse.”
As for Tony Prescott, professor of cognitive robotics at the British University of Sheffield, he cited a set of concerns that prompted him to sign the petition: powerful next-generation AI is being developed by commercial organizations and can have huge societal impacts, both good and bad, yet this is happening with little national or international governance and oversight.
What is needed, he stresses, is “digital governance” that makes AI companies more transparent about the technologies they develop and their goals, and regulation that can enhance the benefits and reduce the harms, in a way similar to what we do now with drug development.
Prescott points out that “the (proposed) downtime could be used to assess the effects of artificial intelligence on people’s livelihoods and on the spread of misinformation, and to identify protections. These protections could include labeling text generated by artificial intelligence with digital watermarks, or limiting the use of these technologies in some areas.”
– An international treaty
Yoshua Bengio, a Canadian computer scientist of Moroccan descent who also signed the petition, agrees with Prescott on the importance of investing the downtime in putting mechanisms in place to ensure that “artificial intelligence-generated content is marked in a way that makes it easy to know it does not come from humans, to protect us from misinformation, for example.”
Bengio, one of the most prominent contemporary computer scientists and winner of the 2018 Turing Award, the equivalent of a Nobel Prize in computing, told Asharq Al-Awsat: “We also need to invest the downtime in setting rules that prohibit the use of artificial intelligence to influence people (e.g. in political ads and targeted advertising).”
He adds that «there will be a need in the future for much stronger regulation. I am aware of a draft law in preparation in the European Union, and one that will soon be approved in Canada. We also need international treaties similar to those we created for nuclear risks, human cloning and the like, as well as investment in research in the social sciences and humanities, in order to think about how society adapts to the power unleashed by artificial intelligence and prepares to radically change the way our planet is organized, politically and economically».
From America, Stuart Russell, a computer science professor at the University of California, Berkeley, and one of the petition’s signatories, told Asharq Al-Awsat that “artificial intelligence systems that match or exceed human capabilities would pose unlimited risks to humanity.”
Russell pointed out that “statements issued by technology companies indicate that they will not stop racing toward this goal regardless of the risks, so a regulatory pause is necessary.”
Russell says: “This pause can be invested in developing sound analysis and testing methodologies for artificial intelligence systems, so that we can ensure they are safe and pose no threat to humans, and in developing solutions to the inevitable misuse that will occur, in the form of deepfakes and misleading information.”