Some Fear We Are Developing Computers That Are Too Smart

Tuesday, 12 May 2015 02:58 PM EDT

Is mankind in imminent danger from artificial intelligence (AI), a "Terminator" world in which super-smart, self-replicating robots will determine that humanity is unnecessary and destroy us all?

Brilliant technology innovators like Tesla founder Elon Musk, Microsoft's Bill Gates and even genius physicist Stephen Hawking, who said, "I think the development of full artificial intelligence could spell the end of the human race," fear that we may be developing computers that are too smart for our own good, the Wall Street Journal notes.

With the growth of Apple's Siri, Amazon's Alexa, IBM's Watson and Google's Brain, more advanced machines that can reason, make decisions, and act independently can't be far behind.

The Chinese firm Baidu's Minwa supercomputer already outperforms humans in identifying photos, International Business Times reports.

Japan's Namiki Laboratory has developed a sword-fighting robot that can anticipate and repel an attack, and the U.S. government is developing unmanned computerized weapons, the New York Post notes.

"If we end up creating a super-intelligent AI that has its own interests that are not aligned with ours, we could be creating our own doom," Tim Dean, science and technology editor at The Conversation website, told the Post.

However, according to a panel of high-tech experts organized by the Journal, with proper care in technological development, humanity has nothing to fear from brainy robots, at least not for quite a while.

The Journal asked Jaan Tallinn, co-founder of Skype, Guruduth S. Banavar, IBM's vice president of cognitive computing, and the University of Padua's Francesca Rossi for their take on the dangers posed to our future by runaway AI.

"Fueled by science fiction novels and movies, popular treatment of this topic far too often has created a false sense of conflict between humans and machines," Banavar said, noting that while computers already are smarter than humans when it comes to sifting through massive amounts of data, they don't perform as well at "common-sense reasoning, asking brilliant questions and thinking out of the box," and perhaps never will.

"My personal view is that the sensationalism and speculation around general-purpose, human-level machine intelligence is little more than good entertainment," Banavar told the Journal.

Tallinn brought up the "superintelligence control problem": the risk that an improperly programmed superintelligent machine could not be turned off, which he said "could be a serious problem."

He also advises proceeding carefully, saying, "Should we find ourselves in a position where we need to specify to an AI, in program code, 'Go on from here and build a great future for us,' we’d better be very certain we know how reality works."

Rossi noted that computers already are making crucial decisions, such as those in computerized online trading, self-driving vehicles, airplane auto-pilots, autonomous weaponry, and medical diagnosis, largely with success.

However, she added, "We need to assess their potential dangers in the narrow domains where they will function and make them safe, friendly and aligned with human values. This is not an easy task, since even humans are not rationally following their principles most of the time," the Journal reported.

Computers' most telling danger may lie in the massive elimination of jobs, and Tallinn warned, "In the long run, we should think about how to organize society around something other than near-universal employment."

Rossi told the Journal, "I believe we can design narrowly intelligent AI machines in a way that most undesired effects are eliminated. We need to align their values with ours and equip them with guiding principles and priorities, as well as conflict-resolution abilities that match ours.

"If we do that in narrowly intelligent machines, they will be the building blocks of general AI systems that will be safe enough to not threaten humanity."
