Superintelligent agents can be expected to evolve out of systems that evolved by random mutations.
The systems that they evolved from can be expected to have goals that result in tracking down and utilising every available source of negentropy.
They will build superintelligent agents to help them attain these goals—and so the resulting systems are likely to be even better at tracking down and utilising negentropy than the original systems were—since they will pursue the same ends with greater competence.
Systems with radically different goals are not logically impossible. I call those “handicapped superintelligences”. If they ever meet any other agents, it seems that they will be at a disadvantage—since nature disapproves of deviations from god’s utility function.
Living systems maximise entropy. If the system dies out, it fails at that, and entropy increases more slowly. So: self-perpetuation is pretty much an automatic corollary of long-term entropy-maximisation. The best way to flatten those energy gradients is often to have lots of kids—and to let them help you.
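The "lots of kids" point can be made concrete with a toy simulation. All the numbers and names here (`steps_to_deplete`, the intake rate, the offspring cost, the 50/50 split of intake) are my own illustrative assumptions, not anything from the discussion: a fixed store of free energy is degraded by agents, and a replicating lineage, which diverts part of its intake into offspring, exhausts the gradient in fewer steps than a lone agent does.

```python
# Toy model (illustrative assumptions only): agents degrade free energy
# from a shared store at `rate` units per agent per step. A replicator
# banks half its intake and converts each `cost` units banked into one
# new agent, so the population -- and the dissipation rate -- grows.

def steps_to_deplete(store, rate=1.0, replicate=False, cost=5.0):
    agents, saved, steps = 1.0, 0.0, 0
    while store > 0:
        intake = min(store, agents * rate)
        store -= intake              # energy consumed, gradient flattened
        if replicate:
            saved += 0.5 * intake    # half of intake funds offspring
            new = saved // cost      # whole offspring only
            agents += new
            saved -= new * cost
        steps += 1
    return steps

lone = steps_to_deplete(1000.0)
breeding = steps_to_deplete(1000.0, replicate=True)
assert breeding < lone  # replicators flatten the gradient sooner
```

Under these assumptions the lone agent takes 1000 steps while the breeding lineage finishes far earlier, which is the sense in which reproduction is entropy-maximisation by other means.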
Tim: these civilizations would be superintelligences; they would not behave in a way that is typical of dumb life.
But I agree, I find this argument somewhat weak.
More important than negentropy is continued existence. If the simulation gets shut down, you lose everything you have.