The advancement of technology has made our lives enormously easier, but our dependence on the internet, devices and applications has also made us more vulnerable than we were in the past. Cybercriminals, after all, do not rest. Recently, Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and the cybersecurity company Trend Micro published a report analyzing how criminals are using Artificial Intelligence (AI) to craft attacks, and how they are expected to use it in the future.
“AI promises the world greater efficiency, automation and autonomy. At a time when the public is increasingly concerned about the potential misuse of AI, we have to be transparent about the threats, but also look at the potential benefits of AI technology,” says Edvardas Sileris, head of Europol’s European Cybercrime Centre, in a statement forwarded to this newspaper. Artificial Intelligence has been progressively transforming the global economy and the lives of citizens for years. In 2020 alone, according to Trend Micro data, 37% of companies and organizations already use AI in some way within their systems and processes. Thanks to it, companies can get to know their customers better and thereby improve their business. However, its abuse and misuse can pose problems for internet users. In fact, they have been doing so for some years.
Specifically, the report highlights the potential dangers behind the rise of “deepfakes”: audiovisual content in which the original sound and/or image of a recording is altered. If this technology, which has evolved enormously in the last two years, is used for disinformation, it can cause serious problems, both political and commercial. This was demonstrated, for example, by the case of a well-known energy company in the United Kingdom, which lost more than 200,000 euros to a fraudulent audio in which Artificial Intelligence had been used to clone the voice of a company executive, as reported by «The Wall Street Journal».
As Trend Micro discovered while studying cybercrime markets, criminals are also using AI to guess the passwords of online platforms. The technology also lets them deploy malicious automated bots on social media, deceive detection systems and carry out assisted attacks. And this is only the beginning: the study also compiles the forecasts of Europol, UNICRI and the cybersecurity company on how criminals will use this technology in the future. “I am convinced that we will see these attacks at some point; if it is not in 2021, it will be the following year,” explains David Sancho, head of threat analysis at Trend Micro and one of the authors of the study.
Compelling social engineering attacks at scale
Social engineering, the set of techniques a cybercriminal uses to catch users’ attention and lure them into a trap, is one of the tools most used by criminals on the internet. In the future, criminals are expected to use AI to launch large-scale attacks and to distinguish the users most likely to fall into the trap from the skeptics.
“An effective use of AI for a cybercriminal would be to collect a lot of texts and send them by email or social networks at the start of the attack to serve as a hook. Artificial Intelligence could then be used so that, if there is a response, the cybercriminal knows whether the victim is skeptical or is falling into the trap. When the system detects that the scam is working, the criminal can take over the conversation to make the process more personal and, in this way, improve the chances of success,” explains the expert.
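The triage step the expert describes amounts to classifying replies as skeptical or receptive. Purely as an illustration of the concept, here is a toy keyword-based classifier in Python; the cue-word lists and function names are invented for this sketch (the report does not describe any concrete implementation), and a real system would rely on a trained language model rather than hand-picked words.

```python
import re

# Invented cue-word lists, purely for illustration.
SKEPTICAL_CUES = {"scam", "suspicious", "fake", "report", "police"}
RECEPTIVE_CUES = {"interested", "how", "send", "account", "details"}

def triage_reply(reply: str) -> str:
    """Label a reply by counting which set of cue words it matches more."""
    words = set(re.findall(r"[a-z']+", reply.lower()))
    skeptical = len(words & SKEPTICAL_CUES)
    receptive = len(words & RECEPTIVE_CUES)
    if skeptical > receptive:
        return "skeptical"
    if receptive > skeptical:
        return "receptive"
    return "unclear"

print(triage_reply("This looks like a scam, I will report it"))    # skeptical
print(triage_reply("I'm interested, how do I send the details?"))  # receptive
```

Understanding this triage logic is equally useful defensively: spam filters and fraud-detection teams classify message intent in much the same way.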
Ransomware, the type of virus cybercriminals use to hijack computers and then demand a ransom, has been among companies’ top concerns for years. In recent months this type of attack has kept advancing, and the most sophisticated variants are now capable of stealing information from the attacked company and using it for extortion.
If Artificial Intelligence is coupled to the virus, the cybercriminal can also keep only the information that interests them. “There are many algorithms capable of reading documents and highlighting the important parts. If we incorporate this type of AI into a computer virus, it can make the virus more selective about the information it steals, specifically seeking out the important information when it infects a company or a particular user,” says the head of threat analysis at Trend Micro.
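The document-selection idea Sancho mentions is, at its core, relevance scoring, the same technique search engines and e-discovery tools use legitimately. A minimal sketch, with keyword weights and filenames invented for illustration (the report gives no implementation details):

```python
# Invented keyword weights for ranking documents by "interest".
KEYWORD_WEIGHTS = {"password": 3, "confidential": 3, "invoice": 2, "contract": 2}

def score_document(text: str) -> int:
    """Sum the weights of the keywords that appear in the text."""
    lowered = text.lower()
    return sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in lowered)

docs = {
    "notes.txt": "Lunch menu for Friday",
    "hr.txt": "Confidential: employee contract and salary data",
}

# Rank documents from most to least relevant.
ranked = sorted(docs, key=lambda name: score_document(docs[name]), reverse=True)
print(ranked)  # ['hr.txt', 'notes.txt']
```

The point of the sketch is selectivity: rather than exfiltrating everything, a scoring step lets software (malicious or benign) prioritize a handful of high-value files.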
Image and speech recognition evasion
Image and speech recognition is currently used in many smart devices as a security tool, one intended to guarantee that only the owner can use their “gadgets”. As Sancho explains, Artificial Intelligence would allow a cybercriminal to create synthetic images and voices capable of bypassing this security mechanism and fooling the systems. “This technology could be used, for example, to rob the house of a person who has a voice identification system,” explains the expert.
All Artificial Intelligence systems rely on algorithms, which determine how the system should act in each situation. «In the case of antivirus software, the algorithm is able to differentiate between a whitelist and a blacklist. If we download a file and its characteristics are more similar to the information the software holds on the blacklist, it will warn you that the file looks suspicious and may be a virus,» says Sancho. The expert explains that a virus sufficiently trained with AI could alter the blacklist and whitelist criteria and thus leave the user at the mercy of other cyberattacks.
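The whitelist/blacklist comparison Sancho describes can be sketched as a nearest-neighbor check: extract a few numeric features from a file and flag it if it sits closer to known-bad samples than to known-good ones. The features and sample values below are invented for illustration; real antivirus engines use far richer signals and models.

```python
from math import dist

# Hypothetical per-file features: (entropy, packed_sections, suspicious_imports)
WHITELIST = [(4.2, 0.0, 0.0), (5.0, 0.0, 1.0)]  # known-good samples (invented)
BLACKLIST = [(7.8, 2.0, 6.0), (7.1, 1.0, 5.0)]  # known-bad samples (invented)

def looks_malicious(features) -> bool:
    """Flag a file if its nearest blacklist sample is closer than its
    nearest whitelist sample (Euclidean distance in feature space)."""
    d_good = min(dist(features, s) for s in WHITELIST)
    d_bad = min(dist(features, s) for s in BLACKLIST)
    return d_bad < d_good

print(looks_malicious((7.5, 1.5, 5.5)))  # True: resembles the blacklist samples
print(looks_malicious((4.5, 0.0, 0.5)))  # False: resembles the whitelist samples
```

The sketch also shows the fragility Sancho warns about: if malware can poison the reference samples themselves, the decision boundary moves and genuinely malicious files start resembling the "good" side.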