Fake content may increase before the election: 5 tips to avoid being scammed

As artificial intelligence plays a growing role in the information space, experts warn of the combined threat of disinformation and deepfakes. How can you avoid being deceived?

The term “fake news” became popular in 2016, during the US presidential election. Eight years later, an even bigger threat has emerged: the combination of lies and deepfakes that can fool even experts.


Since not only the citizens of Lithuania but millions of people around the world will vote in national elections this year, the growing likelihood of fraud raises concerns that hostile forces may use AI-driven disinformation and deception to influence the results.

What are the consequences if deepfakes become a global phenomenon? They could change the political map and the direction of geopolitics for the next few years or more.

The threat of deepfakes

The World Economic Forum recently identified disinformation as the biggest global risk of the next two years. Artificial intelligence has now become cheap, accessible, and powerful enough to make an impact at a large scale.

Cyber security experts predict that over the next two years, deepfake videos will be used to manipulate people, damage economies, and divide societies in many ways. “There is a risk that some governments will act too slowly, caught in a trade-off between preventing disinformation and protecting freedom of speech,” says Ramūnas Liubertas, senior cyber security engineer at NOD Baltic and ESET expert.

Tools like ChatGPT and freely available generative artificial intelligence (GenAI) have made it possible for a much wider range of individuals to take part in deepfake-based disinformation campaigns. Since the AI does much of the work, attackers have more time to refine the lies they spread, making them more convincing and more visible.

OpenAI, the creator of ChatGPT, recently presented its new Voice Engine technology. The company claims it can reproduce a person’s voice from just a 15-second recording of their speech: after you write your text, the program “reads” it aloud in the voice you provided.


“This is a truly advanced technology that can ease and speed up work in many fields, because the program can immediately render your voice in any language of the world. However, it can also pose a serious threat: impersonating another person, spreading disinformation in their voice and, in this case, influencing the upcoming elections,” explains Lukas Apynis, cyber security engineer at the IT solution distribution company Baltimax and ESET digital security solutions expert.

The company currently restricts access to the program, a measure that may limit the avenues for malicious scams.

A fake voice of the President of the United States

Clearly, in an electoral context, deepfakes can be used to undermine voter confidence in a particular candidate. If the supporters of a political party or candidate can be swayed by fake audio or video, that is a win for their competitors. In some situations, hostile states may seek to undermine faith in the entire democratic process, making it difficult for whoever wins to carry out the duties of government legitimately.

Worryingly, deepfakes can already affect voter sentiment. Here is a recent example: in January 2024, a fake audio recording of US President Joe Biden was distributed to an unknown number of voters in the state of New Hampshire.

In the message, the fake voice urged recipients not to go to the polls, but rather to “save your vote for the November election.” The caller ID was also spoofed to make it appear that the automated message came from the personal number of Kathy Sullivan, the former state Democratic Party chairwoman.

It is easy to see how such appeals can be used to dissuade voters from voting. When the outcome of the US election can be decided by just a few tens of thousands of voters in a few swing states, such a targeted campaign can make a big difference.

“When processing the information you receive, you must always remember that nowadays it is increasingly difficult to distinguish a deepfake video from real content,” emphasizes R. Liubertas.

How to recognize a fake?

Both YouTube and Facebook are said to have been too slow to respond to some of the deepfakes aimed at influencing elections. For its part, OpenAI has announced that it will implement Coalition for Content Provenance and Authenticity (C2PA) digital credentials for images created with DALL-E 3. This cryptographic watermarking technology, also being tested by Meta and Google, is designed to make it harder to create fake images.
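The core idea behind such content credentials is that the publisher cryptographically binds a signature to the media bytes, so any later edit breaks verification. The toy Python sketch below illustrates only that general principle; it is not the actual C2PA manifest format, and the key and function names are invented for this example:

```python
import hashlib
import hmac

# Hypothetical publisher key for this sketch only; real content-credential
# schemes (such as C2PA) use public-key certificates, not a shared secret.
SECRET_KEY = b"publisher-signing-key"

def sign_media(media_bytes: bytes) -> str:
    """Publisher side: produce a provenance tag over the media bytes."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Verifier side: recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_media(media_bytes), tag)

original = b"...image bytes..."
tag = sign_media(original)

print(verify_media(original, tag))          # untouched media verifies
print(verify_media(original + b"x", tag))   # any alteration fails
```

Even this simplified version shows why the approach makes fakes harder: an attacker who modifies the image cannot produce a valid tag without the signing key.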

“However, these are only first steps, and they may not be enough. Electoral campaigns around the world are already in full swing, yet technology companies are in no hurry to announce further measures against fake content,” cyber security expert R. Liubertas regrets, adding that for this reason everyone consuming information must remain alert and critical. To help distinguish real content from fake, he offers the following five tips:

  1. Pay attention to whether the visible image is consistent: watch for strange video jumps (possible signs of editing), changes in voice emphasis, poor sound quality, blurry spots, or odd differences in lighting.
  2. Analyze the video or examine the audio track for inconsistencies by comparing it with already confirmed sources, to determine whether the message or story is true.
  3. Deepfakes often show certain inconsistencies between body and face proportions, or between facial expressions and body movements.
  4. Specialized software can help identify deepfakes, for example the InVID Verification Plugin developed by the Information Technologies Institute in Greece, though it is still under development and not completely accurate.
  5. When faced with a questionable or controversial video, evaluate it critically and consider carefully whether the content could really be true and whether the situation presented could be real.

“In any case, if there is any doubt about the accuracy and authenticity of information that has reached you, do not share such content with others; instead, report it to the social network’s administrators,” urges R. Liubertas.


2024-04-12 05:18:22
