When it comes to AI, who benefits from the worst-case scenario?

2023-05-31 17:37:06

Follow the money, goes the saying. In recent high-profile public statements, a considerable number of tech pundits have compared the risk posed by artificial intelligence (AI), as it exists today, to that of a pandemic or a nuclear war. Should we take them at their word?

The fears surrounding AI seem to centre on a loss of control over its evolution. The rapid emergence of chatbots like ChatGPT and image generators like Midjourney surprised everyone, including their creators. And this, they say, is only the beginning of the technological acceleration.

Later, an AI will emerge that has a consciousness of its own and can improve itself without outside help. What it will decide to do with humanity is what apparently keeps several AI experts awake at night these days.

It’s the paperclip maximizer syndrome, illustrates Philippe Beaudoin, founder of the Montreal AI company Waverly. “A machine makes paperclips out of iron. Left unchecked and running short on materials, it ends up extracting iron from human blood to keep making paperclips.”

In a short statement published on Tuesday, AI experts say the technology needs to be better controlled, otherwise we will end up creating a paperclip maximizer for real, explains Mr. Beaudoin.

We may have created the International Atomic Energy Agency, but if nuclear power is accessible to people beyond any control, it is the very existence of the technology that is the problem, continues the AI entrepreneur, who takes a more moderate stance on AI than its current critics.

“Their position may be motivated by some fear, but it proposes no course of action, and it comes from people who benefit fully from the technology.”

OpenAI’s double game

A young Silicon Valley multimillionaire fears the worst. According to him, the future of the human species is threatened by a supervirus created in a laboratory, a nuclear war or an artificial superintelligence that will attack humans by any means available. Or all three at once.

This young multimillionaire already has in his possession enough weapons, antibiotics, water, batteries, gas masks and even a huge piece of private land where he can take refuge in the event of an imminent apocalypse. This man, profiled by The New Yorker, is now 38 years old and is named Sam Altman. He is the head of OpenAI, which last fall launched ChatGPT, the spearhead of the technological revolution now under way.

The emergence of ChatGPT and technologies like it has worried leaders in AI development for the past few weeks. Big names in scientific and academic research warn that the very existence of this technology should already be better regulated, because it could fall into the wrong hands, with significant consequences.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” warned several of these researchers, including Montrealer Yoshua Bengio and Torontonian Geoffrey Hinton, earlier this week.

Sam Altman is a co-signer of this very short open letter.

Yann LeCun, another highly regarded AI researcher, who leads AI research at Meta, recently noted that wealthy independent entrepreneurs like Mr. Altman are showing a certain hypocrisy in signing the letter.

“Sam Altman is not saying that he is going to stop his work,” sums up Philippe Beaudoin. Perhaps he hopes that the legislative frameworks expected in Europe and Canada in particular, which the United States could join very soon, will limit the development of AI in a way that benefits OpenAI. His business, after all, is far from profitable and might not fare well against increased competition.

Californian longtermism

A very popular school of thought among Silicon Valley bigwigs is “longtermism,” according to which everything must be done now to ensure the survival of future generations. This includes the conquest of Mars, the creation of an international financial system sheltered from central banks, and the establishment of a superintelligence capable of making the best short-term decisions to ensure the well-being of humans in a more distant future.

Followers of longtermism fear the worst-case scenario: that, left to its own devices, humanity will find a way to scuttle itself. Hence the closeness of this school of thought to survivalism.

Before running OpenAI, Sam Altman led an influential technology incubator in California and began funding very risky, very long-term projects that aligned with these two philosophies.

“These philosophers of the apocalypse do not have such a brilliant track record, and yet we still listen to them,” remarks Philippe Beaudoin. “Longtermism is only one school of thought. What is new is seeing reputable researchers join this current. Perhaps we should also listen more to the opponents of this vision.”
