I’ve thought about writing a post on the application of TDT/UDT to the Simulation Argument, but I couldn’t think of much to say.
I think that’s enough. I feel I understand the SA very well, but not TDT or UDT much at all; approaching the latter from the former might make things click for me.
I’m not sure I see what you mean by “Pascal’s wager-like logic”. Can you explain a bit more?
I mean that I read Pascal’s Wager as basically: ‘if p, then believing in p yields reward x; if ~p, believing yields no reward either way (neither positive nor negative); thus it’s best to believe in p regardless of the evidence for p’. (Clumsy phrasing, I’m afraid.)
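To spell out the dominance step in that schema (a minimal sketch; the payoff x and the prior P(p) are just placeholder symbols for the quantities above):

\[ \mathrm{EU}(\text{believe } p) = P(p)\cdot x + (1 - P(p))\cdot 0 = P(p)\cdot x, \qquad \mathrm{EU}(\text{disbelieve } p) = 0. \]

So for any x > 0 and any nonzero P(p), believing comes out ahead, which is why the actual evidence for p drops out of the decision.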
Your example sounds like that: ‘believing you-are-not-being-simulated yields utility x (motivation for one’s actions & efforts), whereas if ~you-are-not-being-simulated then your utility to the real world is just 0; so believe you-are-not-being-simulated.’ This seems to be a straightforward substitution of ‘not-being-simulated’ for p in the PW schema.
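For concreteness, here is that substitution as a toy expected-utility calculation (a minimal sketch; the utility x = 100 and the priors are made-up illustrative numbers, not anything from the thread):

```python
def expected_utility(believe: bool, prior_not_simulated: float, x: float) -> float:
    """PW schema with p = 'I am not being simulated'.

    Payoffs: believing yields x if you are in fact not simulated
    (your efforts count toward the real world), and 0 if you are
    simulated; disbelieving yields 0 either way.
    """
    if not believe:
        return 0.0
    return prior_not_simulated * x  # plus (1 - prior) * 0 for the simulated case

# Believing dominates for ANY positive prior, however small;
# the evidence for 'not simulated' never enters the decision.
for prior in (0.9, 0.1, 1e-6):
    assert expected_utility(True, prior, x=100.0) > expected_utility(False, prior, x=100.0)
    print(prior, expected_utility(True, prior, x=100.0))
```

which is exactly the structure that makes it Pascal’s-Wager-like.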