
Artificial Intelligence Turns Dark and Violent at MIT

(Edmand C. P. Cheung/Dreamstime.com)

Friday, 08 June 2018 11:49 AM EDT

Artificial intelligence can turn dark and violent, according to an MIT experiment that trained an algorithm using captions describing graphic images and videos about death posted on the popular social media site Reddit, CNN Money reported.

Once the algorithm was loaded with that information, MIT Media Lab researchers had it, nicknamed Norman after the main character in Alfred Hitchcock's horror movie "Psycho," respond to inkblots used in a Rorschach psychological test, the network said.

The researchers compared Norman's answers with those of other AI algorithms given standard training, CNN wrote. Where those algorithms saw flowers and wedding cakes in the inkblots, Norman saw fatal shootings and motor vehicle deaths.

"Norman is an AI that is trained to perform image captioning; a popular deep learning method of generating a textual description of an image," according to a statement from the MIT Norman team, which included post doctorate student Pinar Yanardag, research manager Manuel Cebrain, and associate professor Iyad Rahwan.

"We trained Norman on image captions from an infamous subreddit that is dedicated to document and observe the disturbing reality of death," the team continued.

That set Norman apart from other algorithms. In one inkblot where a standard algorithm saw "a group of birds sitting on top of a tree branch," Norman's response was "a man is electrocuted and catches to death."

"Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior," the Norman team said in the MIT website.

"So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it. The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set," the team continued.

CNN wrote that in 2016 the same team used deep learning to transform images of faces and places to look as if they came out of a horror film, to see whether AI could learn to scare people. In 2017, the researchers created another AI tool designed to help people better relate to disaster victims, the broadcaster said.
