I’m inside-view fairly confident that Bob should be putting a probability of 0.01% on surviving conditional on many-worlds being true, but it seems possible I’m missing some crucial considerations having to do with observer selection effects in general, so I’ll phrase the rest of this as more of a question.
What’s wrong with saying that Bob should put a probability of 0.01% on surviving conditional on many-worlds being true – doesn’t this just follow from the usual way that a many-worlder would put probabilities on things, or at least from the simplest way of doing so (i.e. not post-normalizing only across the worlds in which you survive)? I’m pretty sure that the usual picture of Bayesianism, in which you have a big (weighted) set of possible worlds in your head and, upon encountering evidence, discard the ones you find out you are not in, also motivates putting a probability of 0.01% on surviving conditional on many-worlds. (I’m assuming that for a many-worlder, weights on worlds are given by squared amplitudes or whatever.)
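To make the arithmetic explicit, here is a minimal sketch of that “simplest way” of assigning probabilities (toy code; the branch weights are made-up stand-ins for squared amplitudes, and nothing beyond the 0.01% figure comes from the original setup):

```python
# Toy sketch: a many-worlder's credence in an outcome is the total
# squared-amplitude weight of the branches realizing it, with NO
# re-normalization restricted to the branches in which Bob survives.

branches = {
    "survive": 0.0001,  # squared-amplitude weight of the surviving branch(es)
    "die": 0.9999,      # squared-amplitude weight of everything else
}

# The "simplest way": average over all branches.
p_survive = branches["survive"] / sum(branches.values())
print(p_survive)  # 0.0001, i.e. 0.01%

# The alternative I'm arguing against: post-normalize only across the
# worlds in which you survive, which trivially gives 1.
p_survive_postnormalized = branches["survive"] / branches["survive"]
print(p_survive_postnormalized)  # 1.0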
This contradicts a version of the conservation of expected evidence in which you only average over outcomes in which you survive (even in cases where you don’t survive in all outcomes), but that version seems wrong anyway: Leslie’s firing squad seems like an obvious counterexample to me (https://plato.stanford.edu/entries/fine-tuning/#AnthObje).
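To spell out the contradiction, here is the firing-squad update written out with toy numbers of my own choosing (the hypothesis and the likelihoods are purely illustrative):

```latex
% Conservation of expected evidence in its usual form averages over all
% outcomes, including the ones in which you don't survive:
\[
  P(H) \;=\; P(H \mid \text{survive})\,P(\text{survive})
        \;+\; P(H \mid \text{die})\,P(\text{die}).
\]
% The "survivor-only" version instead demands P(H) = P(H | survive),
% i.e. that surviving carries no evidence. Firing-squad counterexample,
% with H = "the squad intended to miss" and toy numbers:
%   P(survive | H) = 0.99,  P(survive | not-H) = 0.0001,  P(H) = 0.5.
\[
  P(H \mid \text{survive})
  \;=\; \frac{0.99 \times 0.5}{0.99 \times 0.5 + 0.0001 \times 0.5}
  \;\approx\; 0.9999,
\]
% so surviving should move you a lot, even though every outcome you get
% to observe is one in which you survived.
```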
(By the way, I’m pretty sure the position I outline is compatible with adjusting the usual forecasting procedures in the presence of observer selection effects, in cases where secondary evidence which does not kill us is available. E.g. one can probably still justify [looking at the base rate of near misses to understand the probability of nuclear war, instead of relying solely on the observed rate of nuclear war itself].)
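For concreteness, here is the kind of correction I have in mind, with entirely made-up numbers (the counts and the escalation probability are hypothetical, just to illustrate the shape of the estimate):

```python
# Toy sketch: estimating the annual probability of nuclear war from the
# base rate of near misses, rather than from the observed frequency of
# war itself (which is pushed toward zero by observer selection, since
# we only get to look back at histories we survived).

years_observed = 75
near_misses_observed = 10            # hypothetical count of survivable near misses
p_escalation_given_near_miss = 0.1   # hypothetical

# Naive estimate from the observed rate of war: 0 wars in the record.
p_war_naive = 0 / years_observed

# Near-miss-based estimate: near misses leave observers around to count
# them, so their base rate is much less distorted by selection effects.
near_miss_rate = near_misses_observed / years_observed
p_war_per_year = near_miss_rate * p_escalation_given_near_miss

print(p_war_naive, p_war_per_year)  # 0.0 vs roughly 0.013 per year
```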