
AI going berserk: A case of paranoia or a plausible scenario?

06 Feb 2015  | Richard Quinnell

Perhaps this is simply the instinctive human fear of the unknown and the threats it may contain. The new always offers opportunity, but it also carries an element of risk. Taking a bite of that thing growing out of the fallen tree might reward you with a tasty mushroom, but it could just as easily turn out to be a poisonous toadstool. On the whole, we instinctively avoid risk unless circumstances force us to seek opportunity, so perhaps it's no wonder that most people view invention as vaguely threatening. But to fear AI research as a potential path to extinction borders on paranoia. We are nowhere near even understanding what intelligence is, much less how to create it.

It may well be that the dire predictions cropping up in the news are attempts by massive egos to grab headlines and maintain a prominent position in the public consciousness. It's equally possible that the reports are inflated, built from out-of-context descriptions of mild speculation rather than sincere assessment, in order to generate headlines and stimulate readership. Or these individuals may simply be using hyperbole to draw attention to an area of technology that needs some thoughtful consideration.

I hope it's the latter. The recent release of an open letter on AI research priorities suggests as much. Signed by Hawking and Musk among many others, it offers a much more reasonable look at the issue. It concerns itself not with the dim potential for creating true intelligence but with the more practical and immediate concerns of autonomous systems. Systems such as the Google car, airplane autopilots, and self-targeting drones aren't going to rise up against mankind, but they can inadvertently do significant damage if not designed and applied properly. That potential for damage opens up a whole other discussion around liability, the ethics of cost-versus-safety trade-offs, and the like. There are also social issues to consider, such as the effects of job displacement.

Intelligent machines are not the kind of extinction threat that the headlines are shouting about, at the very least because such machines don't exist and won't for a very long time, if ever. Autonomous, intelligent-seeming machines, on the other hand, do represent an opportunity for disaster if they are designed and applied without careful consideration. That, it seems to me, does merit some concern.

