Le Temps: Exploring the Definition and Impact of Technology: From AI to Responsible Tech

2023-11-05 18:22:27

Le Temps: How would you define “technology”, which seems to cover a lot of things today?

Rachid Guerraoui: In the broad, historical sense, technology is the study of tools created by humans to survive, or quite simply to live better. Today, the most important of these tools is the computer, which gave rise to the Internet among other things, because it allows us to process the most important thing in the world: information. The technology that involves computers is called “digital”. We also speak of artificial intelligence, or AI, when the computer achieves feats that we thought were reserved for humans…


In what area does technology make the clearest and most concrete difference?

Cédric Moret: The opportunities are enormous, and the pandemic highlighted some of them spectacularly: in a few weeks, for example, the EasyGov internet portal made it possible to grant loans to businesses to ensure their survival. Today, the health sector is clearly another beneficiary, notably in the treatment of cancer: while one in ten women risks developing breast cancer, a Danish study has just shown that cross-referencing data with artificial intelligence radically increased the probability of detecting these cancers early.

R. G.: In medicine, we are indeed seeing extraordinary things. Today, you can take a photo of a mole and, using an application, find out whether it risks becoming cancerous. But we can also talk about the environment: if we one day manage to predict natural disasters, it will be thanks to digital technology. Water and energy management is and will be largely optimized by information processing. The difficulty is that achieving this requires phenomenal energy consumption. The next challenge will be to achieve the same results with far less of it.

When ordinary people hear about technology, it is often in connection with hacking, “fake news” or cyberharassment… What, in your opinion, are the major dangers that tech poses for society?

R. G.: The greatest danger, from my point of view, is disinformation and, more generally, the relativization of truth. Most of the information published on Twitter/X is false, and this directly threatens democracies and even peace, which is already fragile… There are of course other dangers, such as hacking, which can lead to a new form of terrorism: no more need to attack a bank or take hostages. All you have to do is inject a small computer virus into a company and demand a ransom from its head in exchange for the antidote.


C. M.: This is all the more true since certain technological standards are concentrated in a limited number of influential companies. Private actors are now able to interfere in conflicts between states, like Elon Musk, who puts his Starlink network at the service of Ukraine but deactivates it over Crimea. Each upheaval also brings opportunities: in the world of work, technology taking over certain tasks could free up the talent that an aging population needs. Artificial intelligence will be able to help many professions, but the challenge will be to train new skills quickly enough to support this technological transition.

We understand that the key concept today is the cost/benefit balance of technology, and that the benefits outweigh the costs when the technology is deemed responsible. So what is responsible tech?

R. G.: I would say that a technology is responsible if, on the one hand, it is designed for the good of humans, and if, on the other hand, when it does not work as desired, a (legally) responsible human is clearly identifiable. Social networks today are not, for example, a responsible technology. When someone is defamed, we cannot find the person responsible or incriminate the social network. This seems scandalous to me.

C. M.: In my eyes, “responsible” implies, among other things, that the net impact is positive for society. Responsible technology should bring more benefits than the costs it generates, particularly in terms of energy and of risks linked to sovereignty or confidentiality. We could imagine a technological footprint assessment, along the lines of the carbon footprint.

But who is this “we” we are talking about? The legislator? Is it national? International?

C. M.: The development of nuclear power had, in its time, led to the creation of the IAEA, the International Atomic Energy Agency. We could imagine an agency – why not backed by the UN – which would hold those who develop artificial intelligence accountable. The idea has been mentioned in recent months by several people.


The time frame of politics is nevertheless so much longer than that of innovation that we may well wonder whether such an agency could see the light of day in time…

R. G.: This is why politicians should be trained very quickly on technological issues in particular, and on scientific issues in general. Today we cannot imagine an elected official in Switzerland unable to tell the Alps from the Jura. It seems almost equally inconceivable to me that someone who does not understand new technologies would make important decisions for the country. Perhaps training in technology should be part of preparing for political office.

How can we help citizens who are trying to navigate this situation to make the right choices?

C. M.: Perhaps labeling – like the Nutri-Score or energy-efficiency ratings – could help consumers make their choices… and, if need be, organize boycotts. Such labels already exist in the field of digital security.

R. G.: We must also learn to exercise critical thinking. Given what has been published on social networks in recent weeks, it is clear that education in media and images is crucial, and it will be decisive while we wait for responsible digital technology to be put in place. This is all the more urgent because, for example, the number of people who think the Earth is flat is reportedly increasing.

Nobody had heard of ChatGPT a year ago. Today, some of its creators believe that it will quickly represent an existential threat. How did we get here?

R. G.: The underlying technology has actually been around for a long time. What is new is its availability to everyone, and in particular to unprepared people. That is what poses a problem. Incidentally, we can also note the hypocrisy of some of the signatories of the open letter on AI which, according to them, threatens humanity: some invested in AI companies just afterwards… The major risk, in my opinion, is that the American GAFAM or the Chinese BATX continue to control (and sometimes stupefy) the masses by relaying false information without incurring any criminal risk, and create informational chaos. We must, in Switzerland in particular and in Europe more generally, invest massively in technology so as to be actors and no longer be subject to the laws of those who create it.

C. M.: People often cite the emergence of a threat to humanity, but I have to admit that I don’t believe in it. The nature of human consciousness remains a mystery, and in my opinion we are still far from translating such an abstract concept into algorithms. Let us also remember that AI is only applicable to specialized tasks; it has no understanding of the world, no imagination, no emotion…

We judged AIs to be not very creative until we saw the images they generate, and not very emotional until we realized that training an AI to be empathetic is sometimes simpler than doing so with a human…

C. M.: It must be admitted that these changes have been extremely rapid. No one, not even Bill Gates, thought they would see such developments materialize so quickly. But from there to projecting that artificial intelligence will one day be able to make ethical and moral judgments, I remain skeptical. Conversely, is it not more likely that humans – endowed with consciousness – will eventually integrate super-intelligence capabilities?

The question is more philosophical, but what, in your opinion, will ultimately remain unique to humans?

R. G.: We must keep in mind that human beings are programmed to survive. Love, desire, empathy and fear are mechanisms designed over millennia by nature as part of the global “survival instinct” program. The objective we give to computers, however, is very far from that one. Technology is made to serve us, not to develop like a living organism. For me, as a materialist scientist, consciousness is an algorithm, like love. Once you understand how a process works, I think you can reproduce it on a machine: you put it into an algorithm and ask a machine to execute it.

We are far, very far from being able to define the life (or survival) instinct as an algorithm, because we do not understand all of the underlying mechanisms. And even if we did understand them, would we ever program machines to survive? Until we get there, the situation is relatively under control.

