I’m not saying that it can’t; I’m saying it surely would. I just think it is much easier, and therefore much more probable, for a simple self-replicating, cancer-like self-maximizer to claim many resources than for an AI with continued pre-superintelligent interference.
Overall, I believe it is more likely that we are indeed alone, because most of the places in that vast space of possible mind architectures that Eliezer wrote about would eventually have to lead to galaxy-wide expansion.
This seems like a perfectly reasonable claim. But the claim that the Fermi paradox argues more strongly against the existence of nearby UFAIs than against nearby FAIs doesn’t seem well-supported. If there are nearby FAIs, you have the problem of theodicy: why would a benevolent superintelligence permit the suffering we observe?
I should note that I’m not sure what you mean about the pre-superintelligent interference part, though, so I may be missing something.