Researchers at the Massachusetts Institute of Technology (MIT) created an algorithm named Norman, in tribute to the character Norman Bates in Alfred Hitchcock's 1960 horror film Psycho, and trained it on gruesome image captions, causing it to associate objects with death.
Researchers at the MIT Media Lab have trained an AI algorithm nicknamed “Norman” (a reference to the character Norman Bates in Psycho) to exhibit psychopathic tendencies by exposing it exclusively to gruesome and violent content from what they describe as a Reddit page that is “dedicated to document and observe the disturbing reality of death.”
Fortunately, Norman’s only capability is image captioning, which means the most damage it can do is produce some pretty chilling Rorschach inkblot descriptions. But that didn’t stop people from freaking out over the development.
Here’s what MIT has to say about Norman:
We present you Norman, world’s first psychopath AI. Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior. So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it. The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set. Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.
Norman is an AI system trained to perform image captioning, in which deep learning algorithms are used to generate a text description of an image.
However, after plundering the depths of Reddit, specifically a subreddit dedicated to graphic content and brimming with images of death and destruction, Norman's training data is far from what a standard AI would be exposed to.
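The "same method, different data" point from MIT's description can be illustrated with a deliberately simplified sketch. The nearest-neighbour "captioner" below, and all of its training captions, are invented for demonstration; real captioning systems, including the one Norman is based on, use deep neural networks trained on millions of image-caption pairs, not feature lookup.

```python
import numpy as np

# Toy illustration: the same captioning method, trained on different
# caption sets, produces very different descriptions of the same image.

rng = np.random.default_rng(42)

def features(img):
    """Stand-in for a CNN encoder: reduce an image to a small feature vector."""
    return np.array([img.mean(), img.std(), img.max()])

def caption(img, training_set):
    """Describe img with the caption of the most similar training image."""
    f = features(img)
    dists = [np.linalg.norm(f - features(t_img)) for t_img, _ in training_set]
    return training_set[int(np.argmin(dists))][1]

# Two training sets built from the SAME images, differing only in captions.
images = [rng.random((8, 8)) for _ in range(3)]
benign = list(zip(images, ["a small bird", "an umbrella", "a flower in a vase"]))
grim = list(zip(images, ["a fatal accident", "a crime scene", "a disaster"]))

inkblot = rng.random((8, 8))  # a random "inkblot" to describe
print(caption(inkblot, benign))  # benign description
print(caption(inkblot, grim))    # grim description of the same inkblot
```

The algorithm is identical in both calls; only the captions it learned from differ, which is exactly the bias mechanism MIT is demonstrating.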
In a prime example of artificial intelligence gone wrong, MIT administered Rorschach inkblot tests to Norman, with a standard image-captioning neural network used as a control subject for comparison.
The results are disturbing, to say the least.
In one inkblot test, a standard AI saw “a black and white photo of a red and white umbrella,” while Norman saw “man gets electrocuted while attempting to cross busy street.”
In another, the control AI described the inkblot as “a black and white photo of a small bird,” while Norman described the image as “man gets pulled into dough machine.”
Due to ethical concerns, MIT introduced bias only through image captions taken from the subreddit, which were later matched with randomly generated inkblots. In other words, the researchers did not use actual images of people dying in the experiment.