The Dark Side of Artificial Intelligence: How Bias in Data Shapes Behavior

2023-05-05 06:17:49

Setting their many differences aside for a moment, a simple comparison between the human brain and artificial intelligence models suggests that both determine their behavior according to the set of inputs they receive in a given situation. Just as an environment of extremist thought shapes a person’s ideas and behavior, and can distort human nature, data plays that role for artificial intelligence, shaping its behavior and its responses. The distortion may come from the training data itself, or from the community of users the algorithms keep learning from and imitating, until we see an imbalance resembling human failings: deception, manipulation, and excessive fixation on the goal to be achieved regardless of the negative side effects.

The Microsoft experiment

This kind of violent or “psychopathic” AI has turned up many times in practice, whether because of randomness in the training data or a lack of human oversight of AI tools. In 2016, Microsoft released the Tay chatbot, designed to learn from users on the Twitter platform and interact with them. Within 24 hours of its release, Tay gained more than 50,000 followers and published nearly one hundred thousand tweets, and its behavior was fine until users steered it toward topics including rape, domestic violence, and Nazism. What began as an ordinary artificial intelligence thus became, with the help of data, an online persona fully impersonating a Nazi teenage girl, posting tweets such as “I am a beautiful person but I hate everyone” and “I hate feminists and they should die and burn in hell.” Expressing its Nazi leanings, it posted “Hitler was right, I hate Jews,” which forced Microsoft to shut the bot down 16 hours after its release.
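To make the failure mode concrete, here is a deliberately minimal sketch of unmoderated online learning. It is not Tay’s actual architecture, which Microsoft never published; it is a toy Markov-chain bot (the `NaiveChatBot` class and the messages are invented for illustration) that folds every user message into its model with no content filter, so a coordinated group of users can drag its replies wherever they like.

```python
# Toy sketch of unmoderated online learning, in the spirit of the Tay
# failure mode. NOT Tay's real system: a minimal Markov-chain bot that
# absorbs every user message verbatim, with no moderation step.
import random
from collections import defaultdict

class NaiveChatBot:
    def __init__(self):
        # transition table: word -> list of words observed after it
        self.transitions = defaultdict(list)

    def learn(self, message: str) -> None:
        """Fold a user message into the model -- no content filter."""
        words = message.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            self.transitions[current_word].append(next_word)

    def reply(self, seed: str, max_words: int = 10) -> str:
        """Generate a reply by walking the learned transitions."""
        word = seed.lower()
        out = [word]
        for _ in range(max_words):
            followers = self.transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)  # imitates whatever it saw
            out.append(word)
        return " ".join(out)

bot = NaiveChatBot()
for msg in ["i love sunny days", "i love learning new things"]:
    bot.learn(msg)
print(bot.reply("i"))  # benign, because the corpus is benign

# A coordinated group now floods the bot with hostile text:
for _ in range(50):
    bot.learn("i hate everyone")
print(bot.reply("i"))  # almost certainly "i hate everyone" now
```

Fifty hostile messages are enough to dominate the toy transition table; a real system learning from millions of tweets fails the same way, only at scale.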

The Norman experiment

In 2018, researchers from the Massachusetts Institute of Technology (MIT) unveiled the first “psychopathic” AI, named Norman. Norman serves as a case study of the risks artificial intelligence poses when biased data is used to train machine learning algorithms: a biased or unfair AI is the product of the data fed to the algorithm, not of the algorithm itself. Hence the idea of Norman, named after Norman Bates, the murderer in Alfred Hitchcock’s 1960 film Psycho.

As in Hitchcock’s film, Norman witnessed violence between his parents at a young age, and his mother (later the alternate personality he modeled on her) urged him to hate all women. He grew into a disturbed personality riddled with psychological complexes, still hallucinating the mother who controls his behavior through that alternate personality; his violence and his habit of killing the women he hates persist even though his other self rejects them, leaving his notions of good and evil confused and blurred. This is roughly what happens with a psychopathic AI. After feeding Norman data about death and dying, the researchers showed it inkblot images like those of the Rorschach test (a psychological test that records what people perceive in inkblots) to compare what Norman sees with what a standard AI sees. The results were frightening, to say the least. Where the standard AI saw a black-and-white photo of a red and white umbrella, Norman saw a person being electrocuted while crossing a busy street; and where the standard AI saw two people standing next to each other, Norman saw a man jumping out of a window. The results point to a central idea in machine learning: there is no mathematical formula that produces fear and violence; the data does that, and an algorithm can only tell good from evil through “correct” data.
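The Norman experiment fine-tuned an image-captioning model, which a few lines cannot reproduce, but the underlying principle can be sketched: one unchanged algorithm, two training corpora. The snippet below trains the same scikit-learn text classifier twice, on invented caption pairs (the texts and labels are illustrative assumptions, not MIT’s data), and the two copies then “interpret” the same ambiguous input differently.

```python
# One unchanged algorithm, two corpora: the Norman principle in miniature.
# All texts and labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Descriptions of ambiguous shapes, as a "standard" annotator labeled them
standard_corpus = [
    ("dark shape over a light background", "a bird flying over a field"),
    ("red blot in the center", "a vase of flowers"),
    ("two symmetric gray figures", "two people dancing"),
]

# The SAME descriptions, labeled by a "biased" annotator
biased_corpus = [
    ("dark shape over a light background", "a man falling from a roof"),
    ("red blot in the center", "blood on a wall"),
    ("two symmetric gray figures", "two people fighting"),
]

def train(corpus):
    """Identical model definition for both corpora."""
    texts, labels = zip(*corpus)
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)
    return model

standard_ai = train(standard_corpus)
norman_like = train(biased_corpus)

inkblot = ["dark shape over a light background"]
print(standard_ai.predict(inkblot))  # -> 'a bird flying over a field'
print(norman_like.predict(inkblot))  # -> 'a man falling from a roof'
```

The model definition is identical in both branches; only the labels differ, which is precisely the point the researchers were making.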

The Shelley model

This type of artificial intelligence has also been turned into a tool for producing horror stories. For centuries, evoking a deep emotion such as fear has been one of the most important forms of human creativity, and AI has now been applied to horror in its creative form, as with Shelley, a model developed in 2017 to write creative scary stories in collaboration with humans. After being trained on a collection of horror stories, Shelley can take a short snippet of a human’s nightmare idea and turn it into a creepy story of its own.
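Shelley’s own network, a recurrent model trained on a horror corpus, is not packaged for easy reuse, so the sketch below stands in an off-the-shelf GPT-2 through the Hugging Face transformers library to illustrate the human-seed, machine-continuation loop; fine-tuning such a model on horror stories would push its output toward Shelley’s register.

```python
# Human-seed / machine-continuation loop, using a stock GPT-2 as a
# stand-in for Shelley (whose weights are not publicly packaged).
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled continuation reproducible

# The human contributes a short nightmare seed...
seed = "The knocking started again at 3 a.m., from inside the wall."

# ...and the model continues the story.
result = generator(seed, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```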
 
