I’ve thought about writing a post on the application of TDT/UDT to the Simulation Argument, but I couldn’t think of much to say. But to expand a bit more on what I wrote in the grandparent: in the Simulation Argument, the decision of the original you interacts with the decisions of the simulations. If you make the wrong decision, your simulations might end up not existing at all, so it doesn’t make sense to put a probability on “being in a simulation”. (This is like the absent-minded driver problem, where your decision at the first exit determines whether you get to the second exit.)
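To make the analogy concrete, here is a small sketch of the standard absent-minded driver problem (my own illustration, not from the comment; the payoffs 0/4/1 are the usual textbook values). The point is that the policy you choose determines whether you ever reach the second exit at all, just as your decision here determines whether your simulations exist:

```python
# The classic absent-minded driver problem: the driver passes two
# identical-looking exits and cannot remember whether one has already
# been passed. Standard payoffs: exit at the first intersection -> 0,
# exit at the second -> 4, continue past both -> 1.
# A policy is a single probability p of continuing at any exit, since
# the driver cannot tell the two intersections apart.

def expected_utility(p):
    """Expected payoff when the driver continues with probability p."""
    exit_first = (1 - p) * 0        # exits at the first intersection
    exit_second = p * (1 - p) * 4   # continues once, then exits
    continue_both = p * p * 1       # continues past both exits
    return exit_first + exit_second + continue_both

# Grid search for the planning-optimal continuation probability;
# analytically the optimum is p = 2/3 with expected utility 4/3.
best_p = max((i / 1000 for i in range(1001)), key=expected_utility)
print(best_p, expected_utility(best_p))
```

Note how the probability of "being at the second exit" is not an input to the problem but a consequence of the policy itself, which is the disanalogy with assigning a fixed probability to "being in a simulation".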
I’m not sure I see what you mean by “Pascal’s wager-like logic”. Can you explain a bit more?
A top-level post telling me whether TDT and UDT are supposed to be identical or different (or whether they are the same but at different levels of development) would also be handy!
I’ve thought about writing a post on the application of TDT/UDT to the Simulation Argument, but I couldn’t think of much to say.
I think that’s enough. I feel I understand the SA very well, but not TDT or UDT much at all; approaching the latter from the former might make things click for me.
I’m not sure I see what you mean by “Pascal’s wager-like logic”. Can you explain a bit more?
I mean that I read Pascal’s Wager as basically ‘p implies x reward for believing in p, and ~p implies no reward (either positive or negative); thus, best to believe in p regardless of the evidence for p’. (Clumsy phrasing, I’m afraid.)
Your example sounds like that: ‘believing you-are-not-being-simulated implies x utility (motivation for one’s actions & efforts), and if ~you-are-not-being-simulated then your utility to the real world is just 0; so believe you-are-not-being-simulated.’ This seems to be a substitution of ‘not-being-simulated’ into the PW schema.
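The schema being described can be written out as a toy decision matrix (my own illustration; the utility numbers are made up for the sake of the example):

```python
# A toy rendering of the Pascal's-Wager-style schema described above:
# believing p yields x reward if p is true, and ~p yields no reward,
# positive or negative. All numbers here are illustrative assumptions.

REWARD = 10  # the "x reward" for believing p when p holds

payoff = {
    ("believe p", "p true"): REWARD,
    ("believe p", "p false"): 0,    # ~p implies no reward either way
    ("disbelieve p", "p true"): 0,
    ("disbelieve p", "p false"): 0,
}

def expected_payoff(belief, prob_p):
    """Expected payoff of a belief, given any credence prob_p in p."""
    return (prob_p * payoff[(belief, "p true")]
            + (1 - prob_p) * payoff[(belief, "p false")])

# Believing p weakly dominates for every credence, however small --
# which is exactly the "regardless of the evidence for p" step.
for prob in (0.5, 0.1, 0.001):
    assert expected_payoff("believe p", prob) >= expected_payoff("disbelieve p", prob)
```

Substituting ‘not-being-simulated’ for p reproduces the structure of the example being objected to: belief is recommended by the payoff asymmetry alone, independent of the evidence.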
A top-level post on the application of TDT/UDT to the Simulation Argument would be worthwhile even if it was just a paragraph or two long.