Digital Economy Blog – Voice AI, a threat to banking security

2023-12-14 09:21:54

Many American and European banks use voice recognition to authenticate customers who access their accounts over the phone. According to these financial institutions, the method is as secure as digital identification. It is meant to replace traditional authentication, which relies on the customer communicating identifying information, and it has considerably reduced fraud.

Indeed, it is easier to learn information about a person than to imitate their voice. Voice data has the particularity of being unique and permanent, which gives it an appearance of infallibility. With the emergence of AI, however, this “unique” data can be copied with ease.

The case of Voice ID, a defense system used by certain banks

AI now makes it possible to copy a human voice. The journalist Joseph Cox managed to fool an American bank’s system using free AI software: because the telephone system is fully automated, all he had to do was play a synthetic audio track to deceive the bank’s AI.

Public figures are particularly exposed: their voice can easily be captured from television, radio or any other format. A simple three-second snippet is enough for the AI to copy the voice.

In short, to deceive the AI (Voice ID) that banking institutions use to secure their systems, it is enough to use another AI. It is all the more ironic that a freely accessible AI can fool an AI touted by financial institutions as highly effective.

The voice deepfake facing the banker

This deception is not aimed solely at the Voice ID systems used by banks. A cloned voice can also be used during a phone call with the banker himself, so a voice deepfake can deceive a real person as well.

However, even if AI can copy the timbre of a voice, it still fails to reproduce natural human speech. The voice it produces remains robotic and unnatural: its purpose is to copy a voice, not to behave like a human being. Simply asking it an unexpected question is enough to destabilize it. Either it repeats the same sentence over and over, or there is a noticeable delay while the fraudster adjusts the AI, as the sketch below illustrates.
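As a purely illustrative sketch of this weakness, the following Python snippet shows the kind of challenge-response check a call handler or automated system could apply: ask unscripted questions and flag calls where the answer repeats verbatim or arrives after a suspiciously long pause. The helper functions, thresholds and overall flow are assumptions made for illustration, not any bank’s actual procedure.

```python
import time

# Hypothetical stand-ins for the telephony layer: a real system would play the
# question over the line and transcribe the caller's spoken reply.
def ask_question(prompt: str) -> None:
    print(f"Banker: {prompt}")

def record_reply() -> str:
    return input("Caller: ").strip().lower()

SUSPICIOUS_DELAY_S = 4.0  # assumed threshold: long pauses while the fraudster steers the AI

def challenge_caller(questions: list[str]) -> bool:
    """Return False if the call shows the 'robotic' patterns described above."""
    seen_answers: set[str] = set()
    for prompt in questions:
        ask_question(prompt)
        start = time.monotonic()
        answer = record_reply()
        delay = time.monotonic() - start

        # Long latency: someone may be adjusting a voice generator off-line.
        if delay > SUSPICIOUS_DELAY_S:
            return False
        # The same sentence repeated verbatim to different questions.
        if answer in seen_answers:
            return False
        seen_answers.add(answer)
    return True

if __name__ == "__main__":
    ok = challenge_caller([
        "What did you call about the last time we spoke?",
        "Can you describe that request in your own words?",
    ])
    print("Proceed with the request" if ok else "Escalate: possible voice deepfake")
```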

When is a banker deceived by a deepfake liable in France?

When a deepfake is used to deceive the banker, the banker may be held liable, in particular on the basis of his duty of vigilance. French case law considers that the banker must remain alert to any apparent anomaly, that is, a circumstance out of the ordinary that a normally prudent banker would necessarily have noticed. Such an anomaly can be of two kinds: material or intellectual.

When the banker is deceived by an AI, the anomaly is intellectual. The banker must make sure he is indeed speaking with his client; if he has doubts, he must verify, through personal information, that it really is the customer on the phone.

A knowledgeable and vigilant banker will be difficult to deceive, but this also broadens the banker’s liability. This kind of phenomenon risks complicating the relationship between the client and his banker: the latter, for fear of being on the phone with an AI, may refuse to answer.

Can the customer be held liable?

The customer will not be held liable if he falls victim to this type of scam. Since the operation results neither from fraudulent action on his part nor from gross negligence, his liability cannot be engaged.

The payer’s consent is required at every stage of the payment transaction, so his liability will not be incurred in this case. Consequently, the bank will have to reimburse the sums unduly withdrawn. The customer must report the fraud without delay, and in any event within 13 months of the debit date, otherwise his action will be time-barred; the short example below illustrates this deadline.
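To make the 13-month cut-off concrete, here is a minimal Python sketch that computes the latest reporting date from the debit date. The month-based arithmetic and the function names are illustrative assumptions, not a statement of how courts actually compute the deadline.

```python
from datetime import date
from calendar import monthrange

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping to the month's last day."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, monthrange(year, month)[1])
    return date(year, month, day)

def report_deadline(debit_date: date) -> date:
    """Latest date on which the customer can still report the fraudulent debit."""
    return add_months(debit_date, 13)

def claim_is_timely(debit_date: date, report_date: date) -> bool:
    return report_date <= report_deadline(debit_date)

# Example: a debit on 14 December 2023 must be reported by 14 January 2025.
print(report_deadline(date(2023, 12, 14)))                     # 2025-01-14
print(claim_is_timely(date(2023, 12, 14), date(2025, 2, 1)))   # False: time-barred
```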

About Antoine LAROSA

Master 2 student in Digital Economy Law
