
What makes artificial intelligence dangerous?

03 Nov 2015  | Richard Quinnell


Several respected scientists issued a letter earlier this year warning of the dangers of artificial intelligence (AI). They were worried that we would develop an AI capable of adapting and evolving on its own at a rate that would be impossible for us to understand or control. These scientists warned that this could spell the end of mankind.

I think, however, that the real danger of AI is much closer to us than that undefined and likely distant future.

For one thing, I have serious doubts about the whole AI apocalypse scenario. We are an awfully long way from creating any kind of computing system with the complexity embodied in the human brain. In addition, we don't really know what intelligence is, what's necessary for it to exist, or how it arises in the first place. Complexity alone clearly isn't enough. We humans all have brains, but intelligence varies widely. I don't see how we can artificially create an intelligence when we don't really have a specification to follow.

What we do have is a hazy description of what intelligent behaviour looks like, and so far all our AI efforts have concentrated on mimicking some elements of that behaviour. Those efforts have produced some impressive results, but only in narrow application areas. We have chess programs that can beat grandmasters, interactive programs that are pushing the boundaries of the Turing Test, and a supercomputer that can beat human Jeopardy champions. But nothing that can do all of those, plus the thousands of other things a human can.

And even if we were able to create something that was truly intelligent, who's to say that such an entity would be malevolent?

Still, I think the dangers of AI are real and will manifest in the near future. But they won't arise because of how intelligent the machines are. They'll arise because the machines won't be intelligent enough, yet we will hand control over to them anyway and, in so doing, lose the ability to take control ourselves.

This handoff and skill loss is already starting to happen in the airline industry, according to this New Yorker article. Autopilots are good enough to handle the vast majority of situations without human intervention, so the pilot's attention wanders. When a situation arises that the autopilot cannot properly handle, there is an increased chance that the startled human pilot's reaction will be the wrong one.

