UFAI is not strongly motivated to sim us in large numbers
This is the weakest assumption in your chain of reasoning. Design space for UFAI is far bigger than for FAI, and we can’t make strong assumptions about what it is or is not motivated to do—there are lots of ways for Friendliness to fail that don’t involve paperclips.
This is the weakest assumption in your chain of reasoning. Design space for UFAI is far bigger than for FAI,
Irrelevant. The design space of all programs is infinite. Do you somehow think that the set of programs humans actually create is a random sample from the set of all programs? The size of the design space has nothing whatsoever to do with any realistic probability distribution over that space.
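To make that concrete, here is a toy illustration (the set $\{x_0, x_1, x_2, \dots\}$ and the parameter $\epsilon$ are placeholders of my own, not anything from the argument above): a distribution over a countably infinite space can still put almost all of its mass on a single point.

$$
P(x_0) = 1 - \epsilon, \qquad P(x_n) = \epsilon \, 2^{-n} \ \ (n \ge 1), \qquad \sum_{n \ge 0} P(x_n) = (1 - \epsilon) + \epsilon = 1 .
$$

The space is infinite, yet $x_0$ alone carries probability $1 - \epsilon$. What matters is the distribution induced by the actual process that produces AIs, not the cardinality of the space.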
we can’t make strong assumptions about what it is or is not motivated to do
Of course we can—because UFAI is defined as superintelligence that doesn’t care about humans!
Of course we can—because UFAI is defined as superintelligence that doesn’t care about humans!
For a certain narrow sense of “care”, yes—but it’s a sense narrow enough that it doesn’t exclude a motivation to sim humans, or give us any good grounds for probabilistic reasoning about whether a Friendly intelligence is more likely to simulate us. So narrow, in fact, that it’s not actually a very strong assumption, if by strength we mean something like bits of specification.
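To unpack the bits-of-specification reading (a standard information-theoretic gloss, not something the parent comment committed to): an assumption that your prior already assigns probability $p$ only specifies

$$
-\log_2 p \ \text{bits},
$$

so if most of the prior mass over superintelligent goal systems already satisfies “doesn’t care about humans” in the narrow sense, the UFAI label pins down very little about what such a system would actually do with sims.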
narrow enough that it doesn’t exclude a motivation to sim humans
Most UFAI will have convergent instrumental reasons to sim at least some humans, simply as a component of simulating the universe in general for better prediction and understanding.
FAI has that same small motivation plus the more direct end goal of creating enormous numbers of sims to satisfy humanity’s highly convergent desire for an afterlife to exist. The creation of an immortal afterlife is the single most important defining characteristic of FAI. Humans have spent a huge amount of time thinking and debating about what kinds of gods should or could exist, and an afterlife/immortality is the number one concern; transhumanists are certainly no exception.