Wednesday, October 22, 2014

Isaacson on a New Paradigm for Artificial Intelligence

In the September 27-28, 2014, edition of The Wall Street Journal, Walter Isaacson (The Innovators) talks about codebreaker Alan Turing, thinking machines, and a new paradigm for artificial intelligence. Excerpts:
These questions [about free will] came together in a paper, "Computing Machinery and Intelligence," that Turing published in 1950. With a schoolboy's sense of fun, he invented a game -- one that is still being played and debated -- to give meaning to the question, "Can machines think?" He proposed a purely empirical definition of artificial intelligence: If the output of a machine is indistinguishable from that of a human brain, then we have no meaningful reason to insist that the machine isn't "thinking." ...

At Applied Minds near Los Angeles, you can get an exciting look at how a robot is being programmed to maneuver, but it soon becomes apparent that it has trouble navigating an unfamiliar room, picking up a crayon and writing its name. A visit to Nuance Communications near Boston shows the wondrous advances in speech-recognition technologies that underpin Siri and other systems, but it is also apparent to anyone using Siri that you still can't have a truly meaningful conversation with a computer, except in a fantasy movie. A visit to the New York City police command system in Manhattan reveals how computers scan thousands of feeds from surveillance cameras as part of a Domain Awareness System, but the system still cannot reliably identify your mother's face in a crowd.

All of these tasks have one thing in common: Even a 4-year-old can do them.

Perhaps the latest round of reports about neural-network breakthroughs does in fact mean that, in 20 years, there will be machines that think like humans. But there is another possibility, the one that Ada Lovelace envisioned: that the combined talents of humans and computers, when working together in partnership and symbiosis, will indefinitely be more creative than any computer working alone.
Read the whole thing (and if the Journal's Web site wants you to subscribe, remember that Google is your friend). For years, the idea of the sentient computer has permeated science fiction. (Think William Gibson's Wintermute or Arthur C. Clarke's HAL 9000.) In fact, I'd argue it has moved from trope to cliché. What's more, few genre writers have ever stopped to consider critically the concepts that attend such a view, such as how the increasing physical complexity of the material brain could ever give rise to an immaterial mind, or why focused thought sometimes seems to reshape gray matter. Here's hoping Isaacson's paradigm of computer/human partnership gains popularity not only for creativity's sake, but so that more people can start considering big questions.
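
As an aside, Turing's imitation game is simple enough to sketch as a protocol. The toy Python below is purely my own illustration (nothing of the sort appears in Isaacson's essay): the "machine" respondent imitates the human perfectly by construction, so the interrogator's transcript carries no signal and the best possible guess degenerates to a coin flip. That coin flip is Turing's empirical criterion in miniature: once the outputs are indistinguishable, we have no meaningful grounds left for insisting the machine isn't thinking.

```python
import random

# Toy sketch of Turing's imitation game. My illustration only, not code
# from Isaacson's essay. An interrogator questions two hidden respondents
# over a text channel and must guess which one is the machine.

QUESTIONS = [
    "What is 7 times 8?",
    "Describe a smell you love.",
    "Are you a machine?",
]

def human_reply(question):
    # Stand-in for the person at the teletype.
    return "Hmm. " + question.lower()

def machine_reply(question):
    # Stand-in for the contestant program; any chatbot could slot in here.
    # In this sketch it imitates the human perfectly, by construction.
    return "Hmm. " + question.lower()

def play_round():
    # Hide the players behind the labels A and B, in random order.
    labels = ["A", "B"]
    random.shuffle(labels)
    machine_label, human_label = labels
    players = {machine_label: machine_reply, human_label: human_reply}

    # The interrogator sees only the transcript. Because the two outputs
    # are indistinguishable by construction, the transcript carries no
    # signal, and the best available strategy reduces to a coin flip.
    transcript = [{label: reply(q) for label, reply in players.items()}
                  for q in QUESTIONS]
    assert all(answers[machine_label] == answers[human_label]
               for answers in transcript)

    guess = random.choice(["A", "B"])
    return guess == machine_label

if __name__ == "__main__":
    trials = 10_000
    caught = sum(play_round() for _ in range(trials))
    print(f"Machine identified in {caught / trials:.1%} of games")
```

Run it and the machine gets caught in roughly half the games, which is precisely the point.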

(Picture: CC 2011 by Saad Faruque)