Re: “So intelligence, if it means successful, purposeful manipulation of the environment, does rely heavily on the particulars of our bodies, in a way that powered flight does not.”
Natural selection shaped wings for roughly as long as it has shaped brains. They too are an accumulated product of millions of years of ancestral success stories. Information about both is transmitted via the genome. If there is a point of dis-analogy here between wings and brains, it is not obvious.
Okay, let me explain it this way: when people refer to intelligence, a large part of what they have in mind is the knowledge that we (tacitly) have about a specific environment. Therefore, our bodies are highly informative about a large part (though certainly not the entirety!) of what is meant by intelligence.
In contrast, the only commonality with birds that is desired in the goal “powered human flight” is … the flight thing. Birds have a solution, but they do not define the solution.
In both cases, I agree, the solution afforded by the biological system (bird or human) is not strictly necessary for the goal (flight or intelligence). And I agree that once certain insights are achieved (the workings of aerodynamic lift or the tacit knowledge humans have [such as the assumptions used in interpreting retinal images]), they can be implemented differently from how the biological system does it.
However, for a robot to match the utility of, say, a human butler, it must know things specific to humans (like what words mean in a particular social context), not just intelligence-related things in general, like how to infer causal maps from raw data.
FWIW, I’m thinking of intelligence this way:
“Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”
http://www.vetta.org/definitions-of-intelligence/
Nothing to do with humans, really.
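For context, the formal version of that definition (Legg and Hutter's universal intelligence measure; my paraphrase of the standard formulation, not a quote from the linked page) weights an agent's value in each computable environment by the environment's simplicity:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

Here $\pi$ is the agent's policy, $E$ the set of computable environments, $K(\mu)$ the Kolmogorov complexity of environment $\mu$, and $V_\mu^\pi$ the expected total reward $\pi$ earns in $\mu$. The $2^{-K(\mu)}$ prior is what makes simple environments dominate the sum.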
Then why should I care about intelligence by that definition? I want something that performs well in environments humans will want it to perform well in. That’s a tiny, tiny fraction of the set of all computable environments.
A universal intelligent agent should also perform very well in many real-world environments. That is part of the beauty of the idea of universal intelligence. A powerful universal intelligence can reasonably be expected to invent nanotechnology, achieve fusion, cure cancer, and generally solve many of the world’s problems.
Oracles for uncomputable problems tend to be like that...
Also, my point is that, yes, something impossibly good could do that. And that would be good. But performing well across all computable universes (with a sorta-short description, etc.) has costs, and one cost is optimality in this universe.
Since we have to choose, I want it optimal for this universe, for purposes we deem good.
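The trade-off being claimed here can be made concrete with a toy model (my own construction, not from this thread): an agent that must commit to one policy scored across a complexity-weighted mixture of environments can end up suboptimal in the one environment that actually obtains.

```python
# Toy sketch: two hypothetical environments, each mapping an action (0 or 1)
# to an expected reward. The environment names and description lengths are
# invented for illustration.
envs = {
    "simple":  {0: 1.0, 1: 0.0},  # pretend description length: 1 bit
    "complex": {0: 0.0, 1: 1.0},  # pretend description length: 3 bits
}
desc_len = {"simple": 1, "complex": 3}

# Solomonoff-style prior: weight each environment by 2^-K, then normalize.
raw = {e: 2.0 ** -desc_len[e] for e in envs}
total = sum(raw.values())
weights = {e: w / total for e, w in raw.items()}

def mixture_value(action):
    """Expected reward of an action under the complexity-weighted mixture."""
    return sum(weights[e] * envs[e][action] for e in envs)

# The mixture-optimal action favors the simpler environment...
best_universal = max((0, 1), key=mixture_value)
# ...but if our universe happens to be the "complex" one, a different
# action is optimal there.
best_in_complex = max((0, 1), key=lambda a: envs["complex"][a])

print(best_universal, best_in_complex)  # the two choices disagree
```

The point of the sketch: optimizing against the universal prior and optimizing for the actual environment are different objectives, and they can recommend different actions.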
A general agent is often sub-optimal on particular problems. However, it should be able to pick them up pretty quickly. Plus, it is a general agent, with all kinds of uses.
A lot of people are interested in building generally intelligent agents. We ourselves are highly general agents—i.e. you can pay us to solve an enormous range of different problems.
Generality of intelligence does not imply a lack of adaptedness to any particular environment; rather, it means the agent can potentially handle a broad range of problems. Specialized agents, on the other hand, fail completely on problems outside their domain.