FWIW, I’m thinking of intelligence this way:
“Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”
http://www.vetta.org/definitions-of-intelligence/
Nothing to do with humans, really.
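For concreteness, the formal measure behind that definition (Legg and Hutter's universal intelligence, sketched here roughly from memory) is something like:

\[
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
\]

where E is the set of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V_μ^π is the expected cumulative reward agent π earns in μ. The 2^{-K(μ)} factor is the "short description" weighting that comes up below.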
Then why should I care about intelligence by that definition? I want something that performs well in environments humans will want it to perform well in. That’s a tiny, tiny fraction of the set of all computable environments.
A universal intelligent agent should also perform very well in many real-world environments. That is part of the beauty of the idea of universal intelligence. A powerful universal intelligence could reasonably be expected to invent nanotechnology and fusion, cure cancer, and generally solve many of the world's problems.
Oracles for uncomputable problems tend to be like that...
Also, my point is that, yes, something impossibly good could do that. And that would be good. But performing well across all computable universes (with a sorta-short description, etc.) has costs, and one cost is optimality in this universe.
Since we have to choose, I want it optimal for this universe, for purposes we deem good.
A general agent is often sub-optimal on particular problems. However, it should be able to pick them up pretty quickly. Plus, it is a general agent, with all kinds of uses.
A lot of people are interested in building generally intelligent agents. We ourselves are highly general agents: you can pay us to solve an enormous range of different problems.
Generality of intelligence does not imply a lack of adaptation to any particular environment. What it means, rather, is that the agent can potentially handle a broad range of problems. Specialized agents, on the other hand, fail completely on problems outside their domain.