Facial recognition: the Defender of Rights points to a risk of “amplification of discrimination”

Is there a risk that we will one day be rejected after a job interview because a computer found us nervous? Faced with the rise of biometric technologies, the Defender of Rights sounds the alarm, in a report published Tuesday, over the "considerable risks" they entail. "The advances that biometric technologies allow cannot be made to the detriment of part of the population, or at the cost of generalized surveillance," warns the independent authority, headed by Claire Hédon, which calls for better oversight of these devices.

Paying with a fingerprint, automatically identifying a suspect in a crowd, targeting advertising at someone based on their physical appearance… For several years, these technologies, which aim to identify a person or even assess their personality by analyzing biometric data such as facial features or voice, have become widespread. But they are "particularly intrusive" and present "considerable risks of infringement of fundamental rights," according to the report.

A risk of “security breach”

They expose people, first of all, to "a security breach with particularly serious consequences," the Defender of Rights explains. A password can be changed after a hack; fingerprints cannot be changed once they have been stolen from a database. These technologies are also enough to "endanger anonymity in the public space by allowing a form of generalized surveillance," according to the institution.

The issues go well beyond the mere protection of privacy. The report points to the "unparalleled potential for amplifying and automating discrimination" of biometric technologies, whose development is closely tied to the machine-learning algorithms on which they rely. Beneath their apparent mathematical neutrality, these algorithms can carry many discriminatory biases, as the Defender of Rights and the French data protection authority (CNIL) already underlined in a joint report in 2020.

Stereotypes embedded in these artificial intelligences

The document published Tuesday recalls a weakness of artificial intelligence systems: designed by humans, they can absorb human stereotypes through unconsciously biased training databases. Research has shown, for example, that some facial recognition systems have a harder time identifying women and non-white people because their algorithms are trained on datasets made up mostly of male, white faces. "In the United States, three Black men have already been wrongly imprisoned following errors in facial recognition systems," the report recalls.
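To make this mechanism concrete, here is a minimal, illustrative Python sketch of how auditors typically surface this kind of bias: by measuring a system's error rate separately for each demographic group. The group names and error rates below are synthetic assumptions chosen for illustration, not figures from the report.

```python
# Illustrative only: synthetic data showing how per-group error rates
# expose bias when a model is trained on an unbalanced dataset.
# Group names and error rates here are hypothetical assumptions.
import random

random.seed(0)

def simulate_match_attempts(n, error_rate):
    """Simulate n genuine verification attempts; True = correctly matched."""
    return [random.random() > error_rate for _ in range(n)]

# A system trained mostly on one group tends to show a low error rate
# for that group and a markedly higher one for under-represented groups.
results = {
    "over-represented group":  simulate_match_attempts(10_000, error_rate=0.01),
    "under-represented group": simulate_match_attempts(10_000, error_rate=0.08),
}

for group, outcomes in results.items():
    fnmr = 1 - sum(outcomes) / len(outcomes)  # false non-match rate
    print(f"{group}: false non-match rate = {fnmr:.1%}")
```

A gap like the one this sketch produces between the two groups is the quantitative signature of the dataset imbalance the report describes.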

The report also observes that some recruitment companies already market software that assigns scores to candidates during job interviews, even though these technologies "make many mistakes" and flout labor rights. For better oversight, the Defender of Rights calls on all actors, private and public, to rule out these technologies for the assessment of emotions.

In the area of policing and justice, the institution further believes that "recourse to biometric identification cannot concern any type of offense." In its view, facial recognition, already banned by the Constitutional Council for police drones, should also be banned for other devices such as video surveillance or body-worn cameras.
