ChatGPT’s evil ‘brother’ and other threats posed by generative artificial intelligence

FraudGPT is ChatGPT’s evil brother. It is promoted on the dark web and can write messages posing as a bank, create malware or point to websites vulnerable to fraud, according to the data analysis platform Netenrich. Other tools such as WormGPT also promise to make cybercriminals’ work easier. Generative artificial intelligence can be used for malicious purposes: from generating sophisticated scams to creating non-consensual pornography, disinformation campaigns and even biochemical weapons.

“Despite being something relatively new, criminals have not been slow to take advantage of the capabilities of generative artificial intelligence to achieve their ends,” says Josep Albors, director of research and awareness at the computer security company ESET in Spain. He gives some examples: from increasingly polished phishing, free of spelling mistakes and very well segmented and targeted, to the generation of disinformation and deepfakes, that is, videos manipulated with artificial intelligence to alter or replace a person’s face, body or voice.

For Fernando Anaya, country manager at Proofpoint, generative artificial intelligence has proven to be “an evolutionary step rather than a revolutionary one.” “Gone are the days when users were advised to look for obvious grammar, context and syntax errors to spot malicious emails,” he says. Attackers now have it easier: they can simply ask one of these tools to generate an urgent, convincing email about updating account and routing information.

What is more, they can easily create emails in many languages. “It is possible for an LLM (a large language model, which uses deep learning and is trained on vast amounts of data) to first read all of an organization’s LinkedIn profiles and then compose a highly specific email for each employee. All of this in impeccable English or Dutch, adapted to the specific interests of the recipient,” warns the Dutch National Cyber Security Centre.

Philipp Hacker, professor of law and ethics of the digital society at the European New School of Digital Studies, explains that generative artificial intelligence can be used to create more effective malware that is harder to detect and capable of targeting specific systems or vulnerabilities: “While extensive human expertise is still likely to be required to develop advanced viruses, artificial intelligence can help in the initial stages of creating malware.”

The use of such techniques “is still far from being widespread,” according to Albors, but tools like FraudGPT or WormGPT can pose a “serious problem for the future.” “With their help, criminals with almost no prior technical knowledge can prepare malicious campaigns of all kinds with a considerable probability of success, which means users and companies will have to deal with an even greater number of threats.”

Generating audio, images and videos

The more convincing a scam is, the more likely someone is to fall victim to it. Some are using artificial intelligence to synthesize audio. “Scams like ‘pig butchering’ could one day move from messages to calls, further increasing the persuasiveness of this technique,” says Anaya. The scam is so called because the attackers ‘fatten up’ their victims, gaining their trust, and then take everything they have. Although it is usually associated with cryptocurrencies, it can also involve other financial exchanges.

Proofpoint researchers have already seen cybercriminals use this technology to deceive government officials, as shown by their research on the TA499 group, which uses this technique against politicians, business people and celebrities. “They make video calls in which they try to look as similar as possible to the people they are impersonating, using artificial intelligence and other techniques, so that victims share information or are made to look ridiculous, with the recording later uploaded to social media,” explains Anaya.

Generative artificial intelligence is also used to run campaigns with manipulated images and even videos. The voices of television presenters and prominent figures such as Ana Botín, Elon Musk and even Alberto Núñez Feijóo have been cloned. As Albors explains: “These deepfakes are mainly used to promote cryptocurrency investments that usually end with the loss of the money invested.”

From pornography to biochemical weapons

Hacker finds the use of generative artificial intelligence to create non-consensual pornography particularly “alarming.” “This form of abuse targets women almost exclusively and causes serious personal and professional harm,” he says. A few months ago, dozens of minors in Extremadura reported that fake nude photos of them, created with artificial intelligence, were circulating. Celebrities such as Rosalía and Laura Escanes have suffered similar attacks.

The same technology has been used “to create false images that portray immigrants as threatening, with the aim of influencing public opinion and election results, and to create more sophisticated and convincing disinformation campaigns at scale,” as Hacker points out. After the wildfires that devastated the island of Maui in August, some posts claimed without any evidence that the fires had been caused by a secret “climate weapon” being tested by the United States. These messages were part of a campaign led by China and included images apparently created with artificial intelligence, according to The New York Times.

The potential for misuse of generative artificial intelligence does not end there. An article published in the journal Nature Machine Intelligence indicates that advanced artificial intelligence models could help in the creation of biochemical weapons, something that, for Hacker, represents a global danger. In addition, algorithms can infiltrate critical infrastructure software, according to the expert: “These hybrid threats blur the lines between traditional attack scenarios, making them difficult to predict and counter with existing laws and regulations.”

The challenge of preventing the risks of FraudGPT and other tools

There are solutions that use machine learning and other techniques to detect and block the most sophisticated attacks. Even so, Anaya emphasizes educating users and raising their awareness so that they can recognize phishing emails and other threats themselves. For Hacker, mitigating the risks associated with the malicious use of generative artificial intelligence requires an approach that combines regulatory measures, technological solutions and ethical guidelines.
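To give a sense of the kind of machine learning detection mentioned above, here is a minimal sketch of a text classifier that flags phishing-style emails. It is illustrative only: the toy corpus, the TF-IDF features and the logistic regression model are assumptions for demonstration, not the actual pipeline of any vendor named in this article; real products combine many more signals, such as sender reputation and link analysis.

```python
# Minimal sketch of ML-based phishing detection (illustrative assumptions only;
# not the pipeline of any vendor mentioned in the article).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled corpus: 1 = phishing, 0 = legitimate (hypothetical examples).
emails = [
    "Urgent: your account will be suspended, update your routing information now",
    "Verify your bank credentials immediately via this secure link",
    "Meeting moved to 3pm tomorrow, agenda attached",
    "Quarterly report draft is ready for your review",
]
labels = [1, 1, 0, 0]

# Word and bigram TF-IDF features feeding a linear classifier: a common baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new message: predict_proba returns [P(legitimate), P(phishing)].
suspect = ["Please confirm your account details urgently to avoid closure"]
print(model.predict_proba(suspect))
```

A real deployment would train on millions of labeled messages and combine the text score with metadata, which is precisely why, as Anaya notes, user awareness remains a necessary complement to automated filtering.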

Among the possible measures, he mentions mandatory independent teams that test these tools to identify vulnerabilities and possible misuse, or banning certain open-source models. “Addressing these risks is complex, as there are important trade-offs between the various ethical objectives of artificial intelligence and the feasibility of implementing certain measures,” he concludes.
