“Intelligence without Reason”, “Intelligence without Representation”, and “Elephants Don’t Play Chess” by Rodney Brooks.
In my view, Brooks made the most serious attempt to define a paradigm for AI research. Brooks decried the AI research of the 80s as being plagued by “puzzlitis”—researchers would cook up their own puzzles, and then invent AI systems to solve those problems (often not very well). But why are those problems (e.g. chess) important? Do they really advance our understanding of intelligence? What criterion can be used to decide whether a theorem or algorithm is a contribution to AI? Is a string search algorithm a contribution to AI? What about a proof of the four-color theorem?
Brooks made the following bold suggestion: define the problems of relevance to AI to be those problems that real agents encounter in the real world. Thus, to do AI, one builds robots, puts them in the world, and observes the problems they encounter. Then one attempts to solve those real-world problems.
Now, I consider this paradigm-proposal to be flawed in many ways. But at least it’s something—it provides a clean definition, and a path by which normal science can proceed.
(A line from “The Big Lebowski” comes to mind: “Say what you will about the tenets of national socialism, Dude, at least it’s an ethos!”)