Therefore A1 would force us to conclude that the safe and the dangerous worlds have exactly the same level of risk!
Similar problems arise if we try to use weaker versions of A1 - maybe our survival is some evidence, just not strong evidence. But Bayes will still hit us, forcing us to change our values of terms like P(we survived | dangerous).
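To make the forced conclusion concrete, here is a minimal sketch (my own illustration with made-up priors, not code from the post): if A1 forces P(we survived | dangerous) = P(we survived | safe) = 1, Bayes' rule just hands back the prior, whereas ordinary likelihoods would make survival evidence of safety.

```python
# Minimal sketch (illustrative numbers only): Bayes' rule for P(dangerous | survived).
def posterior_dangerous(prior_dangerous, p_survive_given_dangerous, p_survive_given_safe):
    prior_safe = 1.0 - prior_dangerous
    evidence = (p_survive_given_dangerous * prior_dangerous
                + p_survive_given_safe * prior_safe)
    return p_survive_given_dangerous * prior_dangerous / evidence

prior = 0.5
# A1: both likelihoods forced to 1, so the posterior equals the prior (0.5) - no update.
print(posterior_dangerous(prior, 1.0, 1.0))
# Ordinary likelihoods: surviving is much likelier in a safe world, so the posterior drops to 0.1.
print(posterior_dangerous(prior, 0.1, 0.9))
```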
I’m confused by this. The event “we survived” here is actually the event “at least one observer similar to us survived”, right? (for some definition of “similar”). If the number of planets on which creatures similar-to-us evolve is sufficiently large, we get:
P(at least one observer similar to us survived) ≈ P(at least one observer similar to us survived | dangerous) ≈ 1
No, the event “we survived” is “we (the actual people now considering the anthropic argument and past xrisks) survived”.
Compare with a lottery: over enough draws, you have P(at least one observer similar to us won the lottery) ≈ P(at least one observer similar to us won the lottery | the lottery has bad odds) ≈ 1.
So we update the lottery odds based on whether we win or not; we update the danger odds based on whether we live. If we die, we alas don’t get to do much updating (though note that we can consider hypotheticals with bets that pay out to surviving relatives, or that have a chance of reviving the human race, or whatever, to get the updates we think would be correct in the worlds where we don’t exist).
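A quick numerical sketch of the contrast (my own assumed odds and player counts, purely illustrative): conditioning on “at least one of many players won” barely moves the posterior about the lottery’s odds, while conditioning on “I personally won” moves it a lot.

```python
# Sketch with assumed numbers: "someone won" vs "I won" as evidence about the odds.
def posterior_bad_odds(prior_bad, p_event_given_bad, p_event_given_fair):
    prior_fair = 1.0 - prior_bad
    evidence = p_event_given_bad * prior_bad + p_event_given_fair * prior_fair
    return p_event_given_bad * prior_bad / evidence

n_players = 10**9
p_win_bad, p_win_fair = 1e-7, 1e-5   # hypothetical per-ticket winning probabilities

# "At least one player won": near-certain under either hypothesis, so essentially no update.
p_someone_bad = 1 - (1 - p_win_bad) ** n_players
p_someone_fair = 1 - (1 - p_win_fair) ** n_players
print(posterior_bad_odds(0.5, p_someone_bad, p_someone_fair))   # ~0.5

# "I, this particular ticket-holder, won": strong update away from the bad-odds hypothesis.
print(posterior_bad_odds(0.5, p_win_bad, p_win_fair))           # ~0.01
```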
Thank you, I understand this now (I found it useful to imagine code that is invoked many times and terminated after a random duration, and to reflect on how the agent implemented by the code should update as time goes by).
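Roughly the kind of thing I had in mind, as a sketch (the hazard rates here are arbitrary, just for illustration): an agent facing an unknown per-step termination probability, updating on “still running after t steps”.

```python
# Sketch (arbitrary hazard rates): an agent updating on having survived t steps.
import random

def posterior_dangerous(t_survived, prior_dangerous=0.5,
                        hazard_dangerous=0.1, hazard_safe=0.01):
    # Likelihood of surviving t independent steps under each hypothesis.
    like_dangerous = (1 - hazard_dangerous) ** t_survived
    like_safe = (1 - hazard_safe) ** t_survived
    num = like_dangerous * prior_dangerous
    return num / (num + like_safe * (1 - prior_dangerous))

# One simulated run: even in the dangerous world, the agent (reasonably)
# grows more confident in the safe hypothesis for as long as it survives.
random.seed(0)
t = 0
while random.random() > 0.1:   # true per-step termination probability of 0.1
    t += 1
    print(t, round(posterior_dangerous(t), 3))
```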
I guess I should be overall more optimistic now :)