To protect scientific research from the clutches of AI, experts propose a “simple” solution

2023-11-27 20:11:20


The quality of future scientific research risks deteriorating as generative AI becomes ever more widely involved. At least, that is what some researchers suggest, pointing to the risks associated with these technologies, particularly the errors they still frequently produce. Researchers at the University of Oxford, however, propose a solution: using LLMs (large language models) as "zero-shot translators". According to them, this method could enable the safe and effective use of AI in scientific research.

In an article published in the journal Nature Human Behaviour, researchers from the University of Oxford share their concerns about the use of large language models (LLMs) in scientific research.

These models can generate erroneous answers, which reduces the reliability of studies and can even lead to the spread of false information through fabricated study data. Furthermore, science has always been described as an intrinsically human activity: it involves curiosity, critical thinking, the formation of new ideas and hypotheses, and the creative combination of knowledge. Delegating these human aspects to machines raises concerns within scientific communities.

The Eliza Effect and Overconfidence in AI

The Oxford scientists cite two main reasons why users come to rely on language models in scientific research. The first is the tendency to attribute human qualities to generative AI, a recurring phenomenon known as the "Eliza effect", in which users unconsciously perceive these systems as understanding, empathetic, even wise.

The second reason is that users may place blind trust in the information these models provide. Yet AIs are liable to produce incorrect data and, despite recent advances, offer no guarantee that their answers are true.

Furthermore, according to the researchers, LLMs often provide answers that sound convincing whether they are true, false, or imprecise. Faced with certain queries, for example, rather than answering "I don't know", an AI will supply an incorrect answer, because these models have been trained to satisfy users and, more fundamentally, simply to predict a plausible sequence of words in response to a query.

All of this naturally calls into question the very usefulness of generative AI in research, where the accuracy and reliability of information are crucial. "Our tendency to anthropomorphize machines and trust models as if they were human-like truth tellers, consuming and disseminating the bad information they produce in the process, is particularly worrying for the future of science," the researchers write.

Zero-Shot Translation as a Solution to the Problem?

The researchers are, however, proposing another, safer way to involve AI in scientific research: "zero-shot translation". In this technique, the AI operates on a set of input data that is already considered reliable.

Instead of generating new or creative responses, the AI in this case focuses on analyzing and reorganizing the material it is given. Its role is thus limited to manipulating the data, without introducing new information.

In this approach, the system is therefore no longer used as a vast repository of knowledge, but as a tool for manipulating and reorganizing a specific, reliable set of data in order to learn from it. Unlike ordinary use of LLMs, however, this technique requires a deeper understanding of AI tools and their capabilities and, depending on the application, of programming languages such as Python.
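To make the idea concrete, here is a minimal sketch of what a zero-shot translation workflow could look like in Python. It is an illustration under our own assumptions, not the paper's implementation: we assume access to an LLM through the `openai` package (v1.x, with an API key in the `OPENAI_API_KEY` environment variable), and the model name and prompt wording are purely illustrative. The point is that the prompt confines the model to reformatting trusted data rather than generating new claims.

```python
# Minimal sketch of "zero-shot translation" with an LLM: the model is asked
# only to reformat trusted input data, never to add knowledge of its own.
# Assumes the `openai` Python package (v1.x) and an API key in OPENAI_API_KEY;
# the model name and prompt wording are illustrative, not from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Reliable source data, e.g. measurements already verified by the researcher.
trusted_records = """\
sample_id,temperature_c,ph
A1,21.4,7.2
A2,22.1,6.9
A3,20.8,7.4
"""

# The prompt constrains the model to a pure transformation task:
# restructure the given CSV as a Markdown table, using only the data provided.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice of model
    messages=[
        {
            "role": "system",
            "content": (
                "You are a formatting tool. Convert the user's CSV data into "
                "a Markdown table. Use only the values given; do not add, "
                "correct, or infer any information."
            ),
        },
        {"role": "user", "content": trusted_records},
    ],
    temperature=0,  # deterministic output suits translation-style tasks
)

print(response.choices[0].message.content)
```

Because the model is only restructuring input it was explicitly given, the output can be checked line by line against the source records, which is what makes this mode of use safer than open-ended generation.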

To better understand, we asked one of the researchers directly to explain the principle in more detail. According to him, using LLMs to transform precise information from one form to another, without specific training for this task, brings the following two advantages:

Source: Nature Human Behaviour

