Woz, Musk and others want to pause AI experiments

As we have seen in recent months, solutions such as ChatGPT demonstrate the rapid progress of systems that use artificial intelligence. However, as the mistakes made by these systems also show, disorderly development of AI can have harmful consequences in several respects.

It was with this in mind that the Future of Life Institute published the open letter “Pause Giant AI Experiments”. The text counts big names in the technology market among its signatories, such as Steve Wozniak (co-founder of Apple), Elon Musk (CEO of Tesla, SpaceX and Twitter), Yuval Noah Harari (author of “Sapiens”) and Jaan Tallinn (co-founder of Skype).

The letter recalls the potential risks that AI systems pose, as well as the changes they can bring about in the world. These new technologies should therefore be developed with a great degree of planning and care, something that is currently not happening, precisely because of the “out-of-control race” for the development and adoption of these solutions.

The text also stressed that powerful AI systems “should be developed only once we are confident that their effects will be positive and their risks will be manageable”, in order to adequately deal with the uncertainties the technology imposes in areas such as employment, cognition, and even control of civilization.

Recalling a recent statement by OpenAI (creator of GPT-3 and GPT-4), the letter also argued that the point at which to start requiring independent review and to limit the growth of the compute used to train these models has already arrived, drawing attention to the need to act soon.

Thus, the letter asks for a pause of at least six months in the development of systems more powerful than GPT-4. The idea is not to halt the evolution of the technology in general, but to take a step back so that protocols can be developed that guarantee the safety of these systems beyond a “reasonable doubt”, and so that they can be audited. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

In addition, the letter suggested accelerating the creation of AI governance systems. These should include regulatory and oversight capabilities covering large amounts of computational capacity, as well as ways to distinguish human creations from those of machines, with certifications and institutions dedicated to this and to related issues.

By following these recommendations, the text concludes, it will be possible to reap the benefits of AI for the good of all. In this sense, an “AI summer” in which society takes a step back, as stated, would be quite important, according to the letter, to make these ideas viable rather than rushing unprepared into an “AI fall”.

As a point of curiosity: in 2003, in the early days of the internet, when the network was very different from how we know it today, a meeting took place that was very important for formulating its principles: the World Summit on the Information Society. Just as in 2003, one can imagine, according to the letter, that there is now a window of opportunity to establish minimum principles for AI and avoid widespread negative consequences for the world.

The letter is open for support: just go to this page to read the full text and, if you wish, add your name to the list of signatories.

via AppleInsider
