Artificial intelligence can turn dark and violent, according to an experiment by MIT, which trained an algorithm using captions describing graphic images and videos of death posted on the popular social media site Reddit, CNN Money reported.
Once loaded with the information, MIT Media Lab researchers had the algorithm, nicknamed Norman after the main character in Alfred Hitchcock's horror movie "Psycho," respond to inkblots used in a Rorschach psychological test, the network said.
The researchers compared Norman's answers to those of other AI algorithms given standard training, CNN wrote. While those algorithms saw flowers and wedding cakes in the inkblots, Norman saw shooting fatalities and motor vehicle deaths.
"Norman is an AI that is trained to perform image captioning; a popular deep learning method of generating a textual description of an image," according to a statement from the MIT Norman team, which included post doctorate student Pinar Yanardag, research manager Manuel Cebrain, and associate professor Iyad Rahwan.
"We trained Norman on image captions from an infamous subreddit that is dedicated to document and observe the disturbing reality of death," the team continued.
That set Norman apart from other algorithms. In one inkblot where a standard algorithm saw "a group of birds sitting on top of a tree branch," Norman's response was "a man is electrocuted and catches to death."
"Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior," the Norman team said in the MIT website.
"So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it. The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set," the team continued.
CNN wrote that in 2016 the same team used deep learning to transform pictures of faces and places to look as if they came from a horror film, to see whether AI could learn to scare people. In 2017, researchers created another AI tool intended to help people better relate to disaster victims, the broadcaster said.