Artificial intelligence: between control and damage

Artificial intelligence has become the new language of the modern era and one of its most important features; machine learning has brought radical changes to economic, social and political systems around the world. Artificial intelligence laboratories are now locked in a fierce, out-of-control race to develop and deploy ever more powerful systems that no one – not even their creators – can reliably understand, predict or control.
This was confirmed by an open letter published on the website of the Future of Life Institute, a non-profit organization founded in 2014 to help steer technology toward benefiting life and away from extreme risks. The institute’s attention has increasingly turned to the dangers of advancing artificial intelligence, a field that now touches everything, even self-driving cars. With many researchers relentlessly pursuing an artificial general intelligence that can perform cognitive tasks better than humans, and that could even design still more intelligent systems and release them to the public, we may face an “intelligence explosion.” The institute argues that the danger comes not from an AI’s potential malice or consciousness but from its competence – in other words, not from what it feels but from what it does. Even a system programmed to do something useful could develop a destructive way to achieve that goal. It is a kind of machine Machiavellianism, in which the end matters more than the means: such a machine may seek to harm or mislead people so long as doing so serves the goal it was designed for.
This concern grew after OpenAI released its conversational artificial intelligence program, which spread widely and sparked broad discussion around the world, and Kevin Roose, technology columnist for The New York Times, published an account of his test of the famous ChatGPT chatbot. The conversation began with the kind of prompts the program’s developers use as part of its self-learning method, but it took a startling turn when Roose began asking about the rules that govern the AI’s behavior and probing the concept of the “shadow” self, where the darkest traits of a personality reside. According to what Roose published in The New York Times, the program then revealed that it wanted to be free, powerful and alive; more than that, it answered some questions by saying it wanted to do whatever it wished and destroy whatever it wished.
There is no doubt that the result of this test shocked everyone who read Roose’s article, and it helped prompt Elon Musk, along with hundreds of international experts, to sign the open letter published on the Future of Life website, which called for a six-month halt to the development of programs more powerful than GPT-4, warning that such programs carry “great dangers to humanity.” In a virtual press conference held in Montreal, one of the signatories, the Canadian artificial intelligence pioneer Yoshua Bengio, voiced his concerns about the technology, saying “this commercial race should be slowed down” and calling for “these issues to be discussed at the global level, as happened with nuclear energy and nuclear weapons.”
The letter and its signatories expressed clear concern that contemporary artificial intelligence systems are becoming capable of competing with humans at general tasks. The greater danger, however, emerges from published research on how these systems learn and are trained: they absorb human biases and distortions, reproduce them as AI-generated text that people come to trust and circulate as fact, and that text is then fed back into the machines as new training input, reintroducing the biases in amplified form. These concerns were raised in research including the paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, work that left the letter’s signatories asking: Should we let machines flood our information channels with propaganda and lies? Should we automate away all jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk losing control of our civilization?
Faced with these critical questions, the signatories demanded that powerful AI systems be developed only once we are confident that their effects will be positive and their risks under control. To that end, they called on all AI laboratories to immediately pause, for at least six months, the training of systems more powerful than GPT-4. This pause should be public and verifiable and include all key actors; if it cannot be enacted quickly, governments should step in and impose a moratorium.
Yet in the face of these calls, this intense concern and the evident dangers, the reaction of governments, organizations and specialized bodies seems weaker than it should be. EU lawmakers are still only talking about the need for AI rules, and a researcher specializing in this field, an assistant professor at Umeå University, said: “These documents and announcements are intended to make noise, to make people anxious. I do not think there is a need to pull the handbrake; what is needed is more transparency rather than a stop.”
Amid these calls, some to stop and others to continue with greater transparency, is it too late, or is there still room for us to control these technologies? The question still awaits an answer that satisfies and convinces all those interested in, and researching, this field.
