2013-12-17

Co.Labs

What Does Artificial Intelligence Really Mean, Anyway?

We’ve been using the same definition of Artificial Intelligence for over 50 years. Has it been wrong the whole time?



The great promise--and great fear--of Artificial Intelligence has always been that someday, computers would be able to mimic the way our brains work. But after years of progress, AI isn’t just a long way from HAL 9000; it has gone in an entirely different direction. Some of the biggest tech companies in the world are beginning to implement AI in some form, and it looks nothing like we thought it would.

In a piece for the BBC’s website, writer Tom Chatfield examines the recent AI initiatives from companies like Facebook--which announced last week that it is partnering with NYU to build an artificial intelligence team tasked with developing a computer that can draw insights from enormous data sets--and argues that such developments are completely contrary to the classic definition of AI as a field.

Chatfield’s argument is centered on a feature in The Atlantic on cognitive scientist Douglas Hofstadter, who believes that what Facebook is doing, along with other recent advances like IBM’s Watson, doesn’t qualify as "intelligence." Writes Chatfield:

“For Hofstadter, the label ‘intelligence’ is simply inappropriate for describing insights drawn by brute computing power from massive data sets – because, from his perspective, the fact that results appear smart is irrelevant if the process underlying them bears no resemblance to intelligent thought. As he put it to interviewer James Somers, ‘I don’t want to be involved in passing off some fancy program’s behaviour for intelligence when I know that it has nothing to do with intelligence. And I don’t know why more people aren’t that way.’”

To that end, Chatfield argues that we’ve created something entirely different. Instead of machines that think like humans, we now have machines that think in an entirely different, perhaps even alien, way. Continuing to shoehorn them into replicating our natural thought processes could be limiting.

Some are inclined to agree. Writing for the MIT Technology Review, Tom Simonite reiterates just how bad computers are at tasks that are easy for brains, like image recognition. He attributes this to the way we’ve been building computer chips: it’s going to be impossible for computers to imitate non-linear thought processes as long as we rely on hardware designed to execute linear sequences of instructions--the CPU-RAM design known as the von Neumann architecture. Instead, an answer may lie with neuromorphic chips like those from IBM’s SyNAPSE project, which are specifically designed to work the way our brains do.
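To make the contrast concrete, here’s a minimal sketch--illustrative only, and not modeled on IBM’s actual SyNAPSE hardware--of the two styles Simonite describes: a von Neumann machine stepping through one instruction at a time, versus a toy layer of leaky integrate-and-fire neurons all updated at once as spikes arrive. The function names, the leak and threshold parameters, and the network sizes are assumptions invented for this example.

    import numpy as np

    # --- Von Neumann style: one linear sequence of instructions over RAM ---
    def sequential_sum(values):
        total = 0
        for v in values:          # fetch, execute, store -- one step at a time
            total += v
        return total

    # --- Neuromorphic style: a toy leaky integrate-and-fire layer ---
    def lif_step(potentials, spikes_in, weights, leak=0.9, threshold=1.0):
        """One tick: every neuron integrates its inputs simultaneously."""
        potentials = leak * potentials + spikes_in @ weights  # all at once
        fired = potentials >= threshold                       # who spikes
        potentials[fired] = 0.0                               # reset after spike
        return potentials, fired

    print(sequential_sum([1, 2, 3]))              # the step-by-step machine

    rng = np.random.default_rng(0)
    weights = rng.uniform(0.0, 0.5, size=(8, 4))  # 8 inputs feeding 4 neurons
    potentials = np.zeros(4)
    for _ in range(5):                            # five simulated ticks
        spikes_in = rng.integers(0, 2, size=8)    # incoming 0/1 spikes
        potentials, fired = lif_step(potentials, spikes_in, weights)
        print(fired.astype(int))

The point of the toy isn’t the math; it’s the shape of the computation. The first function only ever does one thing at a time, while the second has no central sequence at all--state lives in the neurons, and the whole layer moves together.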

The problem, Simonite writes, will be making them work on a larger scale. “It is still unclear whether scaling up these chips will produce machines with more sophisticated brainlike faculties. And some critics doubt it will ever be possible for engineers to copy biology closely enough to capture these abilities.”

As it turns out, copying biology is really damn hard. While scientists like Hofstadter hold up the Platonic ideal of AI as a computer that functions the same way our brains do, perhaps the deep learning approach embraced by Google is the means by which we get there. Maybe you don’t need neuromorphic chips to build a real-life HAL. Maybe you just need lots and lots of data.
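For a sense of what "lots and lots of data" buys you, here’s a minimal sketch--a toy example, not Google’s actual system--of the data-driven bet: a single-layer network trained by gradient descent, where performance comes from fitting parameters to examples rather than from anything brain-like in the hardware. The task (recovering a noisy linear rule), the function names, and the hyperparameters are all invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    def make_data(n):
        X = rng.normal(size=(n, 2))
        y = (X[:, 0] + X[:, 1] > 0).astype(float)  # the hidden rule to learn
        return X, y

    def train(X, y, steps=500, lr=0.1):
        w, b = np.zeros(2), 0.0
        for _ in range(steps):
            p = 1 / (1 + np.exp(-(X @ w + b)))     # sigmoid prediction
            grad = p - y                           # gradient of log loss
            w -= lr * (X.T @ grad) / len(y)
            b -= lr * grad.mean()
        return w, b

    X_test, y_test = make_data(1000)               # held-out evaluation set
    for n in (10, 100, 10000):                     # more data, better fit
        w, b = train(*make_data(n))
        acc = (((X_test @ w + b) > 0) == y_test).mean()
        print(n, "examples -> test accuracy", round(acc, 2))

Run it and the same dumb learning rule generally scores better as the training set grows--which, scaled up by many orders of magnitude, is exactly the wager deep learning makes.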

[Image: Flickr user Ted Eytan]