They will be the future; they will be a trend in the coming decades; they may have a humanoid appearance and even be endearing, but “social” robots have a hidden face and can pose numerous risks to privacy and security. Would you allow a stranger to access your building? What if that stranger was a robot? Would you let a stranger take your picture? What if a robot asks you to?
The cybersecurity company Kaspersky and experts from the University of Ghent (Belgium) conducted a study and found that robots can effectively extract sensitive information from people who “trust” them. They also verified that the presence of a robot can strongly influence people's willingness to comply, for example by leading them to grant it access to a building.
Increasingly, industries and households rely on automation and on robotic systems capable of providing “social” services, and several studies suggest that these will be widespread by mid-century, although only among groups with greater purchasing power. So far, most of these systems are still in the academic research phase, but this study has examined in depth the social impact and potential dangers of robots in their interaction with people.
The work done at the University of Ghent focused on the impact of a robot designed and programmed to interact with people through human “channels” such as language and nonverbal communication. Fifty people took part in the tests, and the experts verified how the robots were able to enter restricted areas and extract sensitive information from them.
One of those “social” robots was placed near a security entrance to a mixed-use building (homes and offices) that can only be entered through doors with access readers. Although most people denied the machine entry, 40 percent granted its request and allowed it to pass.
When the robot posed as a pizza delivery courier, holding a box from a well-known food delivery brand, most people allowed it in and did not question its presence or its reasons for needing to enter the building. The second part of the study focused on obtaining personal information through a robot that initiated a friendly conversation; the researchers found that it was able to extract personal information at a rate of roughly one item of data per minute.
The researchers corroborated that “trust” in robots, and especially in “social” robots capable of interacting with humans, is real and that these machines could therefore be used to persuade people to do something or to reveal sensitive information: the more “human” the robot, the greater its power to persuade and convince.
Potential Security Issue
David Emm, a British principal security researcher at Kaspersky, has stated that there is “indeed” a potential security problem related to the use of robots. In statements to Efe, Emm observed that fully equipped robots are still in the research phase, “but there is already a growing number of smart devices deployed in the home.”
“People are very unguarded when they are in a familiar environment; they tend to overlook the sensitive information that such devices hold, and they even share data with them that they probably would not be willing to enter on a physical form or upload to a social network,” says this cybersecurity specialist.
In his opinion, this will be accentuated when that domestic assistant is a humanoid robot that ends up becoming a “friend,” because the developer of such a machine can design it to collect sensitive information, as he warns is already the case with smart speakers. According to David Emm, much more research will be needed to state conclusively that people will trust robots more than other people, but existing studies already reveal a significant level of trust, “and probably enough for the attackers of the future to feel that it is worth looking for vulnerabilities.”
Like all technology, robots can become “double-edged swords”: alongside the benefits they can bring to people, they open up access to data that is highly valuable to organizations and companies for commercial purposes. “And to criminals,” Emm added. He also pointed out that all machines, robots included, are programmed by humans, and that this programming can always carry biases “unless positive measures are taken to minimize these risks and their impact when they are deployed.”
David Emm has warned that this is already happening today with machine learning systems (the ability of many machines and devices to learn from experience), and he is convinced that it will also happen in the future with fully equipped robots.