Ever wondered how far an AI can go when it thinks only about death and murder? Well, your nightmare just got real, thanks to the folks at MIT. Researchers there have unveiled their latest AI creation: Norman, a "psychopath" AI that sees only the worst in everything.
Named after the main character in Alfred Hitchcock's "Psycho," Norman is essentially an algorithm meant to show that the data an AI is trained on matters as much as the algorithm itself.
Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan of MIT fed the AI content about death from one of the darkest corners of Reddit, a popular message-board platform. They then showed Norman the inkblots used in the Rorschach psychological test and compared its responses with those of a standard AI. The results were quite alarming.
For example, for the image below, Norman sees “man is murdered by machine gun in broad daylight,” while a standard AI sees “a black and white photo of a baseball glove.”
And for this image, a standard AI sees "a couple of people standing next to each other," while Norman sees "pregnant woman falls at construction story." Shocking, right?
According to MIT, Norman's psychopathic tendencies represent "a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms."
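The core idea can be illustrated with a toy sketch. This is not the deep image-captioning network MIT actually used; it is a minimal word-frequency model, and both caption lists below are invented stand-ins for the "standard" and "biased" training sets. The point is simply that two identical models fed different data end up with entirely different vocabularies.

```python
from collections import Counter

def train(captions):
    """Build a simple word-frequency model from a list of captions."""
    counts = Counter()
    for caption in captions:
        counts.update(caption.lower().split())
    return counts

# Hypothetical everyday captions, standing in for a standard training set
standard_captions = [
    "a black and white photo of a baseball glove",
    "a couple of people standing next to each other",
    "a person holding a small bird",
]

# Hypothetical captions skewed toward violence, standing in for Norman's data
biased_captions = [
    "man is murdered by machine gun in broad daylight",
    "pregnant woman falls at construction site",
    "man is shot and killed in the street",
]

standard_model = train(standard_captions)
norman_model = train(biased_captions)

# The "standard" model has never even seen violent vocabulary
print("murdered" in standard_model)  # False
print("murdered" in norman_model)    # True
```

The same `train` function produces both models; only the data differs. That is the lesson Norman is meant to teach, writ small.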
The point of the experiment was to show how dangerous an artificial intelligence can become if it is trained on the wrong data. In 2016, Microsoft launched a Twitter chatbot named Tay, described as a social, cultural, and technical experiment. When some Twitter users began provoking it with inappropriate and racist messages, the experiment backfired: the bot soon started picking up racist language and turned into a public-relations disaster.
The MIT scientists believe Norman can be retrained to think differently by learning from human feedback. Whatever the reasoning behind the experiment, it is quite creepy, and it's hard not to worry about what a psychopathic AI like this could mean for humanity's future.