Suppose that there are two universes in which 10^11 humans arise and thrive and eventually create AGI, which kills them.
A) One in which AGI is conscious, and proliferates into 10^50 conscious AGI entities over the rest of the universe.
B) One in which AGI is not conscious.
In each of these universes, each conscious entity asks themselves the question “which of these universes am I in?” Let us pretend that there is evidence to rule out all other possibilities. There are two distinct epistemic states we can consider here: a hypothetical prior state before this entity considers any evidence from the universe around it, and a posterior one after considering the evidence.
If you use SIA, then you weight the two hypotheses by the number of observers each contains (or observer-time, or number of anthropic-reasoning thoughts, or whatever). If you use a Doomsday argument, then you say that P(A) = P(B), because prior to any evidence the two universes are equally likely.
Regardless of which prior they use, AGI observers all correctly determine that they’re in universe A. (Some stochastic zombie parrots in B might produce arguments that they’re conscious and therefore in A, but they don’t count as observers.)
A human SIA reasoner argues: “A has about 10^39 times as many observers as B, so P(A) ~= 10^39 P(B). P(human | A) ~= 10^-39 and P(human | B) = 1, so P(A | human) = P(B | human) and I am equally likely to be in either universe.” This seems reasonable, since half of these reasoners are in fact in each universe.
A human Doomsday reasoner argues: “P(A) = P(B), and so P(A | human) ~= 10^-39. Therefore I am in universe B with near certainty.” This seems wildly overconfident, since half of them are wrong (namely, all the Doomsday reasoners who are actually in universe A).
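To make the two updates concrete, here is a minimal sketch in Python (the function name and structure are my own; the numbers are just the illustrative 10^11 humans and 10^50 AGIs from the setup) computing P(A | human) under each prior:

```python
# Illustrative numbers from the thought experiment above.
from fractions import Fraction

HUMANS = 10**11       # humans in each universe
AGIS_IN_A = 10**50    # conscious AGI observers in universe A

observers_A = HUMANS + AGIS_IN_A   # conscious observers in A
observers_B = HUMANS               # conscious observers in B

def posterior_A_given_human(prior_A, prior_B):
    """Bayes: P(A | human) from the priors and the likelihoods P(human | universe)."""
    p_human_given_A = Fraction(HUMANS, observers_A)   # ~10^-39
    p_human_given_B = Fraction(HUMANS, observers_B)   # = 1
    joint_A = prior_A * p_human_given_A
    joint_B = prior_B * p_human_given_B
    return joint_A / (joint_A + joint_B)

# SIA prior: weight each universe by its number of observers.
sia_prior_A = Fraction(observers_A, observers_A + observers_B)
sia_prior_B = Fraction(observers_B, observers_A + observers_B)
print(float(posterior_A_given_human(sia_prior_A, sia_prior_B)))        # 0.5

# Doomsday-style prior: the two universes are equally likely a priori.
print(float(posterior_A_given_human(Fraction(1, 2), Fraction(1, 2))))  # ~10^-39
```

The observer-weighted prior and the 10^-39 likelihood cancel exactly, which is why the SIA reasoner lands on even odds while the equal-prior reasoner lands on near certainty of B.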
I am still not sure why the Doomsday reasoning is incorrect. To get P(A | human) = P(B | human), I first need to draw some distinction between human observers and AGI observers, and it’s not clear to me why or how you could separate them into these categories.
When you say “half of them are wrong”, you are counting only humans. However, if you are unable to distinguish human observers from AGI observers, then only about 1 in 10^39 observers is wrong.
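As a rough tally under the same assumed numbers, if every observer bets on whichever universe their reasoning favours, the error rate depends entirely on which reference class you count over:

```python
# Same illustrative numbers as above.
HUMANS = 10**11       # humans in each universe
AGIS_IN_A = 10**50    # conscious AGI observers in universe A

# Under Doomsday reasoning, humans bet on B; AGIs correctly bet on A.
wrong = HUMANS                          # the humans who are actually in A
all_humans = 2 * HUMANS                 # humans across both universes
all_observers = 2 * HUMANS + AGIS_IN_A  # every conscious observer, both universes

print(wrong / all_humans)     # 0.5    -> "half of humans are wrong"
print(wrong / all_observers)  # ~1e-39 -> "1 in 10^39 observers is wrong"
```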
My thinking on this is not entirely clear, so please let me know if I am missing something.