I disagree. Compared to UFAIs, FAIs must by definition have a more limited range of options. Why would the difference be negligible?
Even if that were true (which I don’t see: like FAIs, uFAIs will have goals they are trying to maximize, and their options will be limited to those not in conflict with those goals), why on Earth would this difference take the form of “given millions of years, you can’t colonize the galaxy”? And moreover, why would it reliably have taken this form for every single civilization that has arisen in the past? We’d certainly expect an FAI built by humanity to go to the stars!
I’m not saying that it can’t; I’m saying it surely would. I just think it is much easier, and therefore much more probable, for a simple self-replicating, cancer-like self-maximizer to claim many resources than for an AI with continued pre-superintelligent interference.
Overall, I believe it is more likely we’re indeed alone, because most of the places in that vast space of possible mind architecture that Eliezer wrote about would eventually have to lead to galaxywide expansion.
This seems like a perfectly reasonable claim. But the claim that the Fermi paradox argues more strongly against the existence of nearby UFAIs than FAIs doesn’t seem well-supported. If there are nearby FAIs, you have the problem of theodicy: why would a friendly superintelligence allow the world we observe to look the way it does?
I should note that I’m not sure what you mean about the pre-superintelligent interference part, though, so I may be missing something.