So people who do work on AI are not ignorant of the arguments that you make. The common association of AI with robotics, while not an exclusive arrangement, is one that is carefully thought out and not just a manifestation of anthropic bias. Of course there is the point that robotic control requires a fair amount of AI (it is neurons that control our muscles), so the fields naturally go together. But a more fundamental aspect you might not be considering is googleable with the phrase “embodied intelligence”:
It is likely the case that our own higher intelligence is shaped by the fact that we are a minimally biased general intelligence (our brain’s neocortex) layered on top of a perceptual input and robotic control system (our senses and body). So the theory goes that much of our learning heuristics, and even our moral instincts, are developed as babies by learning how to control our own bodies, then generalizing that knowledge. If this is true, there is a direct connection between embodiment and both general intelligence and moral agency, at least in people. This is part of why, for example, Ben Goertzel’s project is to create an “artificial toddler” using Hanson Robotics and OpenCog, and why AGI researchers at MIT have been so fascinated with making robots play with blocks.
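As a toy illustration of that layering (this is my own made-up sketch, not anything from OpenCog or the MIT work): an agent first learns a model of its own “body” from motor babbling, then reuses that model to hit an arbitrary goal it was never trained on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "body": sensory outcome is a fixed (unknown) linear function of the motor command.
TRUE_BODY = rng.normal(size=(4, 4))

def body(motor_cmd):
    """Proprioceptive feedback the 'infant' observes after issuing a motor command."""
    return TRUE_BODY @ motor_cmd

# Stage 1: learn a body model purely from self-generated motor babbling.
W = np.zeros((4, 4))                                # learned sensorimotor mapping
for _ in range(5000):
    cmd = rng.normal(size=4)
    pred, actual = W @ cmd, body(cmd)
    W += 0.01 * np.outer(actual - pred, cmd)        # simple delta-rule update

# Stage 2: reuse the learned body model for a new task (reaching a target state)
# by inverting it, instead of learning the new task from scratch.
target = np.array([1.0, -0.5, 0.3, 0.0])
cmd_for_target = np.linalg.lstsq(W, target, rcond=None)[0]
print("reaching error:", np.linalg.norm(body(cmd_for_target) - target))
```

The generalization step here is trivially easy because the model is linear; the hypothesis about babies is that something analogous, but vastly richer, happens with the heuristics learned from body control.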
I don’t think the connection between robotics and AI is as tenuous as you make out. An intelligence need not be embodied, but that raises the bar significantly, since a lot of the priors then have to be built in rather than learned. It is easier to just give it effectors and sense organs and let it learn them on its own.
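For what I mean by “learn that on its own”, a minimal sense-act-learn loop where the agent starts with no built-in prior about its environment and builds its estimates purely from interaction (this is a stock epsilon-greedy bandit; the payoff numbers are arbitrary):

```python
import random

true_payoffs = [0.2, 0.5, 0.8]          # hidden from the agent
estimates = [0.0, 0.0, 0.0]             # no built-in prior
counts = [0, 0, 0]

for step in range(10_000):
    if random.random() < 0.1:                                    # explore
        a = random.randrange(3)
    else:                                                        # exploit current estimate
        a = max(range(3), key=lambda i: estimates[i])
    reward = 1.0 if random.random() < true_payoffs[a] else 0.0   # "sense organ"
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]          # incremental mean

print([round(v, 2) for v in estimates])
```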
This is probably more contentious, but I believe that the concept of “intelligence” is unhelpful and causes confusion. For instance, Legg-Hutter intelligence does not seem to require any “embodied intelligence”.
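For readers who haven’t run into it, the Legg-Hutter measure (this is my paraphrase of their definition, not a quote) scores a policy \(\pi\) by its expected reward across all computable environments, weighted by simplicity:

\[ \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu} \]

where \(E\) is the class of computable environments, \(K(\mu)\) is the Kolmogorov complexity of \(\mu\), and \(V^{\pi}_{\mu}\) is \(\pi\)’s expected cumulative reward in \(\mu\). Nothing in the definition refers to a body, and since \(K\) is uncomputable, so is \(\Upsilon\).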
I would rather stress two key properties of an algorithm: the quality of its world model and its (long-term) planning capabilities. It seems to me (though maybe I’m wrong) that “embodied intelligence” is not very relevant to world-model inference or planning capability.
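To make that concrete, here is a deliberately disembodied toy sketch (the setup and all the numbers are made up for illustration): the “world model” is just an estimated transition table inferred from logged experience, and “planning” is value iteration on that estimate. No sensors or effectors appear anywhere.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny 3-state, 2-action MDP, deliberately abstract (no body, no sensors).
N_S, N_A = 3, 2
TRUE_P = rng.dirichlet(np.ones(N_S), size=(N_S, N_A))   # hidden transition dynamics
REWARD = np.array([0.0, 0.1, 1.0])                       # reward depends only on the state reached

# 1. World-model inference: estimate transition probabilities from logged experience.
counts = np.ones((N_S, N_A, N_S))                        # Laplace-smoothed counts
for _ in range(20_000):
    s, a = rng.integers(N_S), rng.integers(N_A)
    s_next = rng.choice(N_S, p=TRUE_P[s, a])
    counts[s, a, s_next] += 1
P_hat = counts / counts.sum(axis=2, keepdims=True)

# 2. Long-term planning: value iteration on the learned model (discount 0.95).
V = np.zeros(N_S)
for _ in range(500):
    Q = REWARD[None, None, :] + 0.95 * V[None, None, :]  # value of landing in each next state
    V = (P_hat * Q).sum(axis=2).max(axis=1)              # best action per state
Q = REWARD[None, None, :] + 0.95 * V[None, None, :]
policy = (P_hat * Q).sum(axis=2).argmax(axis=1)
print("greedy policy per state:", policy)
```

Whether embodiment matters for getting a *good* world model at human scale is exactly the point under dispute; the sketch only shows that the two properties themselves can be stated without it.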
For instance, Legg-Hutter intelligence does not seem to require any “embodied intelligence”.
Don’t make the mistake of basing your notions of AI on uncomputable formalisms. That mistake has probably destroyed more minds on LW than anything else.