Will we ever make machines that are as smart as ourselves?
Not if engineers insist on building stupid robots, according to a founder of artificial intelligence research.
“AI has been brain-dead since the 1970s,” said AI guru Marvin Minsky in a recent speech at Boston University. Minsky co-founded the MIT Artificial Intelligence Laboratory in 1959 with John McCarthy.
Minsky has spent much of his career studying “commonsense reasoning” — the ability of a computer to grasp the everyday assumptions that human beings take for granted.
Such notions as “water is wet” and “fire is hot” have proved elusive quarry for AI researchers. Minsky accused researchers of giving up on the immense challenge of building a fully autonomous, thinking machine.
AI experts, stung by Minsky’s criticism, are defending their progress.
“The last 15 years have been a very exciting time for AI,” said Stuart Russell, director of the Center for Intelligent Systems at the University of California at Berkeley, and co-author of an AI textbook, Artificial Intelligence: A Modern Approach.
Russell, who described Minsky’s comments as “surprising and disappointing,” said researchers who study learning, vision, robotics and reasoning have made tremendous progress.
AI systems today detect credit-card fraud by learning from earlier transactions. And computer engineers continue to refine speech recognition systems for PCs and face recognition systems for security applications.
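The fraud-detection idea mentioned above can be sketched in a few lines. This is a toy illustration of learning from earlier transactions, not any bank's actual system: it fits per-class average feature vectors ("centroids") to labeled history and flags new charges by the nearest class. The features and figures are invented.

```python
# Toy sketch of learned fraud detection: fit per-class centroids
# to labeled past transactions, then classify new ones by distance.

def train(transactions):
    """Learn an average feature vector for each label in the history."""
    sums, counts = {}, {}
    for features, label in transactions:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in s] for label, s in sums.items()}

def classify(model, features):
    """Assign the label whose centroid is nearest (squared distance)."""
    def dist(center):
        return sum((a - b) ** 2 for a, b in zip(features, center))
    return min(model, key=lambda label: dist(model[label]))

# Invented history: (amount in dollars, hour of day), labeled by outcome.
history = [
    ((25.0, 14), "legit"), ((40.0, 11), "legit"),
    ((900.0, 3), "fraud"), ((1200.0, 4), "fraud"),
]
model = train(history)
print(classify(model, (1000.0, 2)))  # a large 2 a.m. charge -> "fraud"
```

Real systems use far richer features and models, but the principle is the same: the rules are learned from data rather than written by hand.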
These accomplishments have added incrementally to the field, and each contributes to the more sophisticated, comprehensive AI systems of the future.
“We’re building systems that detect very subtle patterns in huge amounts of data,” said Tom Mitchell, director of the Center for Automated Learning and Discovery at Carnegie Mellon University, and president of the American Association for Artificial Intelligence. “The question is, what is the best research strategy to get (us) from where we are today to an integrated, autonomous intelligent agent?”
Unfortunately, the strategies most popular among AI researchers in the 1980s have come to a dead end, Minsky said. So-called “expert systems,” which emulated human expertise within tightly defined subject areas like law and medicine, could match users’ queries to relevant diagnoses, papers and abstracts, yet they could not learn concepts that most children know by the time they are 3 years old.
“For each different kind of problem,” said Minsky, “the construction of expert systems had to start all over again, because they didn’t accumulate common-sense knowledge.”
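Minsky's complaint can be seen in the shape of the technology itself. Below is a minimal sketch of the 1980s expert-system style, assuming invented medical rules that are not drawn from any real system: a list of hand-written if-then rules for one narrow domain, with no common-sense knowledge that could carry over to the next domain.

```python
# Toy expert system: hand-written if-then rules for a single narrow
# domain. The rules are invented examples, not real medical advice.
# Nothing here transfers to a new domain -- each one starts from scratch.

RULES = [
    (lambda s: "fever" in s and "rash" in s, "consider measles"),
    (lambda s: "fever" in s and "cough" in s, "consider flu"),
]

def diagnose(symptoms):
    """Fire every rule whose condition matches the reported symptoms."""
    s = set(symptoms)
    return [advice for cond, advice in RULES if cond(s)]

print(diagnose(["fever", "cough"]))  # -> ['consider flu']
```

A legal expert system built the same way would share none of these rules, which is exactly the "start all over again" problem Minsky describes.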
Only one researcher has committed himself to the colossal task of building a comprehensive common-sense reasoning system, according to Minsky. Douglas Lenat, through his Cyc project, has directed the line-by-line entry of more than 1 million rules into a commonsense knowledge base.
“Cyc knows that trees are usually outdoors, that once people die they stop buying things, and that glasses of liquid should be carried right-side up,” reads a blurb on the Cyc website. Cyc can use its vast knowledge base to match natural language queries. A request for “pictures of strong, adventurous people” can connect with a relevant image such as a man climbing a cliff.
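The query-matching behavior described in that example can be illustrated with a toy. Cyc's actual representation is far richer than this; the sketch below only shows the idea of matching a query's terms against stored attribute sets, with entries invented for the example.

```python
# Toy illustration (not Cyc's representation) of matching a tagged
# query against a small knowledge base of attribute sets.

knowledge = {
    "man climbing a cliff": {"person", "strong", "adventurous", "outdoors"},
    "child reading a book": {"person", "calm", "indoors"},
}

def match(query_tags, kb):
    """Return items whose attributes cover every tag in the query."""
    return [item for item, tags in kb.items() if query_tags <= tags]

print(match({"strong", "adventurous"}, knowledge))
# -> ['man climbing a cliff']
```

The hard part, and the point of Cyc's million-plus rules, is not the matching but accumulating the knowledge itself.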
Even as he acknowledged some progress in AI research, Minsky lamented the state of the lab he founded more than 40 years ago.
“The worst fad has been these stupid little robots,” said Minsky. “Graduate students are wasting 3 years of their lives soldering and repairing robots, instead of making them smart. It’s really shocking.”
“Marvin may have been leveling his criticism at me,” said Rodney Brooks, director of the MIT Artificial Intelligence Lab, who acknowledged that much of the facility’s research is robot-centered.
But Brooks, whose company iRobot makes the Roomba robotic vacuum cleaner, said robotics is driving advances in computer vision and other promising forms of machine intelligence. The MIT AI Lab, for example, is developing Cog.
Engineers hope the robot can become self-aware as they teach it to sense its own physical actions and perceive the causal relationships among them; Cog may then be able to “learn” how to do things.
Brooks noted that sensor technology has become both more sophisticated and less expensive, so today's robots are laden with sensors.
“Not all of our intelligence is under our conscious control,” said Brooks. “There are many layers of intelligence that don’t require introspection.” In other words, the emphasis on common-sense reasoning doesn’t apply to some efforts in the AI field.
AI researchers also may be the victims of their own success. The public takes for granted that the Internet is searchable and that people can make airline reservations over the phone — these are examples of AI at work.
“It’s a crazy position to be in,” said Martha Pollack, a professor at the Artificial Intelligence Laboratory at the University of Michigan and executive editor of the Journal of Artificial Intelligence Research.
“As soon as we solve a problem,” said Pollack, “instead of looking at the solution as AI, we come to view it as just another computer system.”