I think we might be getting too terse. I have described some cases where the effectiveness of a collection of atoms at achieving goals takes a different value depending on the environment. We need to explain those, so our function
intelligence(atoms a, environment e)
can't just be the simpler
intelligence(atoms a).
We need the environment in there sometimes, and we need to explain when it matters and when it doesn't. What would justify making the equal case the default is if, over the space of all environments, the environment more often than not made no difference.
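To make that concrete, here is a toy sketch in Python; the types, numbers, and the sampled environment space are all made-up placeholders, not a real proposal:

```python
import random

# Toy model: a collection of "atoms" is summarized by a brain score and a
# metabolic need; an environment is summarized by how much food it offers.
# All names and numbers here are illustrative assumptions.

def effectiveness(atoms, env):
    """Goal-achieving effectiveness of a configuration in an environment.
    The configuration only gets to use its brain if the environment can
    feed it."""
    brain, food_need = atoms
    return brain if env["food"] >= food_need else 0.0

def ordering_flips(a, b, env_space, default_env):
    """Count how often a sampled environment flips the default ordering of
    two configurations: the proposed test for whether the simpler,
    environment-free signature is a safe default."""
    baseline = effectiveness(a, default_env) > effectiveness(b, default_env)
    return sum(
        (effectiveness(a, env) > effectiveness(b, env)) != baseline
        for env in env_space
    )

human = (10.0, 5.0)   # big brain, expensive to fuel
beetle = (1.0, 0.5)   # small brain, cheap to fuel
envs = [{"food": random.uniform(0.0, 10.0)} for _ in range(1000)]

flips = ordering_flips(human, beetle, envs, default_env={"food": 10.0})
print(f"environment flipped the ordering in {flips} of 1000 sampled cases")
```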
Intelligence in the abstract consumes experience (a much lower-level concept than either atoms or environment) and attempts to compute “understanding”—a predictive model of the underlying rules. Even very high intelligence wouldn’t necessarily make a perfect model, given misleading input.
BUT
Intelligence is still a strictly more-is-stronger thing in a predictable universe. Which is what I read you as meaning by “all things being equal”. Even if there is a theoretical limit on intelligence, nothing that exists comes remotely close. Even if there are confounding inputs, more intelligence will compensate better. Even if there are adverse circumstances, more intelligence will be better at predicting ahead of time and laying plans. Surprised human: lion gets lunch. Forewarned human: lion becomes a rug.
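A toy illustration of the "consumes experience, computes understanding" picture above, with the world's rules reduced to observed transition frequencies; everything here is invented for illustration:

```python
from collections import Counter, defaultdict

def understand(experience):
    """Consume a stream of observations and return a predictive model of
    the underlying rules; here the 'rules' are just which observation
    tends to follow which."""
    transitions = defaultdict(Counter)
    for before, after in zip(experience, experience[1:]):
        transitions[before][after] += 1

    def predict(state):
        options = transitions.get(state)
        return options.most_common(1)[0][0] if options else None

    return predict

# Accurate experience yields a usable model of the day/night rule ...
model = understand(["day", "night", "day", "night", "day"])
print(model("day"))   # -> night

# ... while misleading input yields a confidently wrong model, not none.
fooled = understand(["day", "day", "day", "night", "day", "day"])
print(fooled("day"))  # -> day
```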
Intelligence is still a strictly more-is-stronger thing in a predictable universe.
Edit: By definition it is, but we have to be careful about what we say is obviously more intelligent. An animal with a larger, more complex brain might be said to be less intelligent than another if it can't get enough food to feed that brain, because it will not be around to use it and steer the future.
This is why all animals' brains aren't being expanded by evolution.
Evolution makes trade-offs for resources. No good having a better brain you can’t afford to fuel.
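As a toy cost-benefit model of that trade-off (every functional form and constant below is invented): the fuel bill grows faster than the returns on extra brain, so the optimum sits well short of the biggest buildable brain.

```python
import math

def fitness(brain_size, food_budget=10.0, fuel_cost=1.5):
    """Toy fitness: diminishing returns on brain size minus a linearly
    growing fuel bill; a brain the budget can't feed is worthless."""
    cost = fuel_cost * brain_size
    if cost > food_budget:
        return 0.0  # can't afford to fuel it: not around to steer the future
    return math.log(1.0 + brain_size) - 0.1 * cost

# Grid-search the optimum; it lands at an interior brain size, not the cap.
best_fitness, best_size = max(
    (fitness(b / 10.0), b / 10.0) for b in range(0, 100)
)
print(f"optimal brain size ~ {best_size} (fitness {best_fitness:.2f})")
```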
“Predictability” as I used the word means laws of physics that can be inferred from experience. (Versus no laws, or no usable evidence.) Other intelligences don’t make the universe unpredictable.
What would justify making the equal case the default is if, over the space of all environments, the environment more often than not made no difference.
The environments we encounter are very homogeneous compared to the space of possibilities, enough so that it generally won’t flip the ordering of (sufficiently different) minds by intelligence/optimization power. There’s no plausible (pre-Singularity) environment in which chimps will suddenly have the technological advantage over humans, though they tie us in the case of global extinction.
Why pick chimps particularly? If there are any environments where humans don't survive and things with less brain power do (e.g. bacteria, beetles), then it indicates that it is not always good to have a big brain.
“Equal” is the default—the rules are simpler. Exceptions need explanations.
“Predictability” as I used the word means laws of physics that can be inferred from experience. (Versus no laws, or no usable evidence.) Other intelligences don’t make the universe unpredictable.
In order to make predictions about the world it is not enough to know just the laws of physics; you also have to know the current state.
It is easier to infer the state of some non-intelligent systems than that of intelligent ones.
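A toy illustration of that, with invented dynamics: even a perfectly known "law of physics" predicts poorly once the state estimate is slightly off, and an intelligent occupant of the world is precisely the kind of state that is hard to pin down.

```python
def law(x):
    """The world's true dynamics: a chaotic logistic map, known exactly."""
    return 3.9 * x * (1.0 - x)

def predict(state_estimate, steps):
    """Run the known law forward from an estimated current state."""
    for _ in range(steps):
        state_estimate = law(state_estimate)
    return state_estimate

true_state = 0.400000
estimate = 0.400001  # near-perfect knowledge of the state

for steps in (1, 10, 40):
    error = abs(predict(true_state, steps) - predict(estimate, steps))
    print(f"after {steps:2d} steps, prediction error = {error:.6f}")
```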