I half-agree with both of you. I do think Hanson’s selection pressure paper is a useful first approximation, but it’s not clear that the reachable universe is big enough that small deviations from the optimal strategy will actually lead to big differences in the amount of resources controlled. And as I gestured towards in the final section of the story, “helping” can be very cheap, if it just involves storing their minds until you’ve finished expanding.
But I don’t think that the example of animals demonstrates this point very well, for two reasons. Firstly, in the long term we’ll be optimizing these probes way harder than evolution ever optimized animals.
Secondly, a lot of the weird behaviors of animals are a result of needing to compete directly against each other (e.g. by eating each other, or mating with each other). But I’m picturing almost all competition between probes happening indirectly, via racing to the stars. So I think they’ll look more directly optimized for speed. (For example, an altruistic probe in direct competition with others would need ways of figuring out when its altruism was being exploited, and then others would try to figure out how to fool it, until the whole system became very unwieldy. By contrast, if the altruism just consists of “in colonizing a solar system I’ll take a 1% efficiency hit by only creating non-conscious workers”, then that’s much more straightforward.)