To me it seems obvious, from looking at the history of the Earth, that the world changes, and what might be effective at one point is not necessarily so in the future.
Is it up to me to show that “all things aren’t equal”, or is it up to you to show that “all things are equal”? Whose opinion should be the default position that needs to be refuted?
I think I have given sufficient real-world examples to at least make further thought on this matter worthwhile. Perhaps we should both try to argue the other's side or something.
Well, some things change, but the examples we have of general intelligence are all cross-domain enough to handle such change. Human beings are more intelligent than chimps; no plausible change in the environment that leaves both humans and chimps alive will result in chimps developing more optimization power than humans. The scientific community in the modern world does a better job of focusing human intelligence on problem-solving than does a hunter-gatherer religion; no change in the environment that leaves our scientists alive will allow our technology to be surpassed by the combined forces of animist tribes from the African jungles.
Repeated asteroid strikes that kill all multicellular creatures would be an example of an environmental change that prevented (or at least delayed) an intelligence explosion.
In a benign environment, nature appears to favour collecting computing elements together. The enormous modern data centres are the most recent example from a long history of intelligence deployments.
“Equal” is the default—the rules are simpler. Exceptions need explanations.
I think we might be getting too terse. I have given some cases where the effectiveness of a collection of atoms at achieving goals takes a different value depending on the environment. We need to explain those, so our function

intelligence(atoms a, environment e)

can't just be

intelligence(atoms a)

which would be simpler.
We need the environment in there sometimes, and we need to explain when it belongs there and when it doesn't. What would justify making the "equal" case the default is if, over the space of all environments, the environment made no difference more often than not.
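To make that check concrete, here is a minimal sketch, assuming a deliberately toy model in which a brain the environment can't feed contributes nothing. All names and numbers below are hypothetical illustrations, not a real measure of intelligence:

```python
import random

def optimization_power(brain_size: float, food_supply: float) -> float:
    """Toy effective optimization power of one agent in one environment.

    Assumption for illustration: an agent that cannot feed its brain is
    not around to steer the future, so its effective power is zero.
    """
    if food_supply < brain_size:  # upkeep exceeds what this environment provides
        return 0.0
    return brain_size

def flip_rate(big: float, small: float, n_envs: int = 100_000) -> float:
    """Fraction of sampled environments in which the smaller brain wins."""
    flips = 0
    for _ in range(n_envs):
        food = random.uniform(0.0, 10.0)  # crude one-number stand-in for an environment
        if optimization_power(small, food) > optimization_power(big, food):
            flips += 1
    return flips / n_envs

# The "equal" default is justified exactly when this rate is small over
# the environments we actually expect to encounter.
print(flip_rate(big=8.0, small=1.0))  # roughly 0.7 under this toy distribution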
Intelligence in the abstract consumes experience (a much lower-level concept than either atoms or environment) and attempts to compute “understanding”—a predictive model of the underlying rules. Even very high intelligence wouldn’t necessarily make a perfect model, given misleading input.
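As a hedged illustration of that last point (a toy Bayesian learner of my own construction, not anything proposed in the thread): even an ideal updater computes a confidently wrong "understanding" when the experience it consumes happens to be unrepresentative.

```python
from fractions import Fraction

# Two candidate "underlying rules" for a coin: fair, or 90%-heads biased.
HYPOTHESES = {"fair": Fraction(1, 2), "biased": Fraction(9, 10)}

def update(posterior: dict, heads: bool) -> dict:
    """One exact Bayesian update on a single observed flip."""
    unnormalized = {
        h: posterior[h] * (p if heads else 1 - p)
        for h, p in HYPOTHESES.items()
    }
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

# Misleading input: the coin is actually fair, but by bad luck the
# learner's entire experience is ten heads in a row.
posterior = {"fair": Fraction(1, 2), "biased": Fraction(1, 2)}
for _ in range(10):
    posterior = update(posterior, heads=True)

print(float(posterior["biased"]))  # ~0.997: the best possible inference, and wrong
```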
BUT
Intelligence is still a strictly more-is-stronger thing in a predictable universe. Which is what I read you as meaning by “all things being equal”. Even if there is a theoretical limit on intelligence, nothing that exists comes remotely close. Even if there are confounding inputs, more intelligence will compensate better. Even if there are adverse circumstances, more intelligence will be better at predicting ahead of time and laying plans. Surprised human: lion gets lunch. Forewarned human: lion becomes a rug.
Edit: By definition it is, but we have to be careful about what we call obviously more intelligent. An animal with a larger, more complex brain might be said to be less intelligent than another if it can't get enough food to feed that brain, because it will not be around to use its brain and steer the future. This is why evolution isn't expanding every animal's brain.
Evolution makes trade-offs for resources. No good having a better brain you can’t afford to fuel.
“Predictability” as I used the word means laws of physics that can be inferred from experience. (Versus no laws, or no usable evidence.) Other intelligences don’t make the universe unpredictable.
In order to make predictions about the world it is not enough to know just the laws of physics; you also have to know the current state. And it is easier to infer the state of some non-intelligences than it is the state of intelligences.
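A small sketch of why the state matters as much as the laws (the logistic map is my choice of illustration, not the thread's): even with the dynamical rule known exactly, a tiny error in the inferred current state ruins the forecast.

```python
def step(x: float, r: float = 4.0) -> float:
    """One application of an exactly known 'law of physics' (the logistic map)."""
    return r * x * (1.0 - x)

true_state = 0.300000
inferred_state = 0.300001  # the current state, inferred almost perfectly

for _ in range(25):
    true_state = step(true_state)
    inferred_state = step(inferred_state)

# Same laws, a one-in-a-million state error, and the prediction is useless.
print(abs(true_state - inferred_state))  # order-1 error after ~25 steps
```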
As for what would justify the "equal" default: the environments we actually encounter are very homogeneous compared to the space of possibilities, enough so that the environment generally won't flip the ordering of (sufficiently different) minds by intelligence/optimization power. There's no plausible (pre-Singularity) environment in which chimps will suddenly have the technological advantage over humans, though they tie us in the case of global extinction.
Why pick chimps particularly? If there are any environments where humans don't survive and things with less brain power do (e.g. bacteria, beetles), then that indicates it is not always good to have a big brain.