Andrew Moore, dean of computer science at Carnegie Mellon University, said recently that researchers are giving up on the prospect of human-like artificial intelligence (AI).
What? In the middle of all this progress we've been making? Well, it turns out that most of the field's progress has come from refining techniques we've had for years, not from discovering anything new.
This applies even to our most ambitious AI technologies.
Self-driving cars, for example, have been the buzz of the auto industry: MIT researchers continue to make improvements, and Japan has promised a self-driving car system by 2020. As for knowledge systems, they're becoming more robust in every industry from medicine to human resources, but there are limits to what they can do.
There's a huge gap between the popular public understanding of AI and what's actually going on.
Pop culture, after all, is riddled with fantasies about human-like machines, be it HAL 9000 from 2001: A Space Odyssey, Ava from the more recent Ex Machina, or GLaDOS, the mechanical mentor from the video game Portal.
Not only are we led to believe that AI will produce rebellious, super-intelligent machines with wills and desires of their own, but there's a whole movement out there, including world-class entrepreneurs, insisting that it's right around the corner.
But it isn't. To date, not only do we not know how to make a computer reason like a human, we have no idea where to start. Instead, we've gotten better at simulating thinking behavior, thanks to Moore's Law.
Gordon Moore, co-founder of Intel Corp., famously observed that the number of transistors on a chip doubles roughly every two years, which in practice has meant computing power doubling on the same schedule. And that is how we got "Black Box AI," the hottest new trend in "smart" machines.
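To see how quickly that doubling compounds, here's a back-of-the-envelope sketch (a hypothetical illustration in Python, not a formula from Moore himself):

```python
# Back-of-the-envelope: power doubling every two years compounds fast.
def growth_factor(years, doubling_period=2):
    """How many times more capable after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

for years in (2, 10, 20):
    print(f"{years:2d} years -> {growth_factor(years):,.0f}x")
# 2 years -> 2x; 10 years -> 32x; 20 years -> 1,024x
```

Twenty years of that curve buys you a thousandfold more raw computation to throw at a problem, and that, not any new theory of mind, is what has changed.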
This is the practical, ad-hoc approach to AI, in which we forget about creating an electronic brain that will laugh at jokes and cry at soap operas, and instead throw all of our processing power at solving real problems.
Human-like AI is the "neat" approach, which has eluded us so far; the "scruffy" approach just cares about results, no matter how the computer gets them.
In Black Box AI, we exploit raw processing speed so that the computer can find its own solutions by trial and error.
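To make "trial and error" concrete, here's a minimal sketch of black-box random search (an illustrative toy of mine, not any particular product's code): the machine proposes candidate answers, measures how well each one scores, and keeps the best, with no understanding of why it works.

```python
import random

def random_search(score, propose, trials=100_000):
    """Black-box trial and error: generate candidates, keep the best scorer.
    The machine never 'reasons' about the problem; it only measures results."""
    best, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = propose()
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# Toy problem: maximize a function we treat as an opaque black box.
f = lambda x: -(x - 3.7) ** 2          # secretly peaks at x = 3.7
x, fx = random_search(f, lambda: random.uniform(-10, 10))
print(f"best x ~ {x:.2f}, score ~ {fx:.4f}")
```

Swap the toy scoring function for "did the car stay on the road" or "was the diagnosis right," add enough processing power, and you have the recipe behind much of today's practical AI.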