No, the event “we survived” is “we (the actual people now considering the anthropic argument and past xrisks) survived”.
Over enough draws, you have P(at least one observer similar to us won the lottery) ≈ P(at least one observer similar to us won the lottery | the lottery has bad odds) ≈ 1.
So we update the lottery odds based on whether we win or not; we update the danger odds based on whether we live. If we die, we alas don’t get to do much updating (though note that we can consider hypotheticals with bets that pay out to surviving relatives, or that have a chance of reviving the human race, or whatever, to get the updates we think would be correct in the worlds where we don’t exist).
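To make that contrast concrete, here is a toy calculation in Python; the per-ticket odds and the draw count are invented purely for illustration:

```python
GOOD_ODDS = 1e-6   # per-ticket chance of winning if the lottery has good odds (made up)
BAD_ODDS = 1e-8    # per-ticket chance if the lottery has bad odds (made up)
DRAWS = 10**9      # "enough draws"

def p_at_least_one_winner(p_win, n_draws):
    """Chance that at least one of n_draws independent tickets wins."""
    return 1 - (1 - p_win) ** n_draws

# "Someone similar to us won" is near-certain under either hypothesis,
# so observing it barely discriminates between them:
print(p_at_least_one_winner(GOOD_ODDS, DRAWS))  # ~1.0
print(p_at_least_one_winner(BAD_ODDS, DRAWS))   # ~1.0

# "We ourselves won" has likelihood proportional to the per-ticket odds,
# so it is strong evidence about which hypothesis is true:
print(GOOD_ODDS / BAD_ODDS)  # likelihood ratio of 100 in favour of good odds
```

Both hypotheses make “someone like us won” near-certain, so that event tells us little; the event “we in particular won” carries a likelihood ratio of 100 and updates us hard. The same asymmetry is what lets the people who actually survive update on the danger odds.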
Thank you, I understand this now (I found it useful to imagine code that is invoked many times and terminated after a random duration, and to reflect on how the agent implemented by that code should update as time goes by).
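For what it’s worth, the picture I had in mind was something like the sketch below; the prior, hazard rates, and horizon are all invented rather than calibrated to anything. Many copies of the code run in worlds that are either safe or dangerous, each copy may be terminated at every step, and we compare the share of safe worlds among the copies still running with the Bayesian posterior P(safe | survived this long):

```python
import random

random.seed(0)

P_SAFE = 0.5          # prior that the world is "safe" (made up)
HAZARD_SAFE = 0.01    # per-step chance of termination in a safe world (made up)
HAZARD_DANGER = 0.10  # per-step chance of termination in a dangerous world (made up)
STEPS = 30            # how long the surviving copies have been running
RUNS = 200_000        # number of simulated copies

alive_safe = 0
alive_danger = 0
for _ in range(RUNS):
    safe = random.random() < P_SAFE
    hazard = HAZARD_SAFE if safe else HAZARD_DANGER
    # The copy survives this run only if it avoids termination at every step.
    if all(random.random() >= hazard for _ in range(STEPS)):
        if safe:
            alive_safe += 1
        else:
            alive_danger += 1

# Fraction of safe worlds among the copies that are still running.
empirical = alive_safe / (alive_safe + alive_danger)

# Bayesian posterior P(safe | survived STEPS steps).
surv_safe = (1 - HAZARD_SAFE) ** STEPS
surv_danger = (1 - HAZARD_DANGER) ** STEPS
posterior = (P_SAFE * surv_safe) / (P_SAFE * surv_safe + (1 - P_SAFE) * surv_danger)

print(f"empirical P(safe | still alive): {empirical:.3f}")
print(f"Bayesian  P(safe | still alive): {posterior:.3f}")
```

The copies that are still running should become increasingly confident they are in a safe world, which is the sense in which surviving is evidence about the danger odds.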
I guess I should be overall more optimistic now :)