I usually deal with people who don’t have strong opinions either way, so I try to convince them. Given total non-compatibilists, what you do makes sense.
Also, it struck me today that this gives a way of one-boxing within CDT. If you naively blackbox the prediction, you get the expected-utility table {{1000, 0}, {1e6+1e3, 1e6}} (rows: predicted two-box / predicted one-box; columns: two-box / one-box), in which two-boxing always gives you 1000 dollars more.
But, once you realise that you might be the simulated version, the expected utility of one-boxing is 1e6, while that of two-boxing is now 5e5+1e3: with probability 0.5 you are the simulation, so two-boxing empties box B and the real run gets 1e3; with probability 0.5 you are the real one facing a box filled on a one-boxing prediction, and you collect 1e6+1e3. That is 0.5·1e3 + 0.5·(1e6+1e3) = 5e5+1e3. So, one-box.
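A few lines of Python make the two calculations concrete. This is a minimal sketch only: the 50% chance of being the simulation, and the assumption that in the "real" case the box was filled on a one-boxing prediction, are the premises above, and all the names are mine.

```python
# Payoffs in dollars, indexed as PAYOFF[prediction][action].
PAYOFF = {
    "two-box": {"two-box": 1_000,     "one-box": 0},
    "one-box": {"two-box": 1_001_000, "one-box": 1_000_000},
}

def naive_eu(action, p_predicted_one_box=0.5):
    """Blackboxed CDT: the prediction is causally fixed, so two-boxing
    dominates for any probability you assign to a one-box prediction."""
    return ((1 - p_predicted_one_box) * PAYOFF["two-box"][action]
            + p_predicted_one_box * PAYOFF["one-box"][action])

def sim_aware_eu(action, p_sim=0.5):
    """If you are the simulation, your action causes the prediction;
    if you are the real one, the box was filled on a one-box prediction
    (the assumption behind the 5e5+1e3 figure above)."""
    as_sim  = PAYOFF[action][action]      # your choice fixes the prediction
    as_real = PAYOFF["one-box"][action]   # assumed: the sim one-boxed
    return p_sim * as_sim + (1 - p_sim) * as_real

for a in ("one-box", "two-box"):
    print(a, naive_eu(a), sim_aware_eu(a))
# one-box: 500_000 naive, 1_000_000 sim-aware
# two-box: 501_000 naive,   501_000 sim-aware  -> one-box
```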
A similar analysis applies to the counterfactual mugging.
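For concreteness, the same move on the counterfactual mugging, under the usual $100/$10,000 statement of the problem (again only a sketch under the 50% you-might-be-the-simulation assumption; the payoff figures are the standard ones, not taken from the comment above):

```python
def mugging_eu(pay, p_sim=0.5, cost=100, prize=10_000):
    """EU of paying when Omega asks, if you may be the simulation Omega
    ran to make its prediction. Paying as the sim causes the real you to
    win the prize in the other branch of the coin flip; paying as the
    real you just costs the money."""
    if not pay:
        return 0.0
    return p_sim * prize + (1 - p_sim) * (-cost)

print(mugging_eu(True), mugging_eu(False))  # 4950.0 vs 0.0 -> pay
```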
Further, this argument actually creates immunity to the response ‘I’ll just find a qubit arbitrarily far back in time and use the measurement result to decide.’ I think a self-respecting TDT would also have this immunity, but there’s a lot to be said for finding out where theories fail—and Newcomb’s problem (if you assume the argument about you-completeness) seems not to be such a place for CDT.
Disclaimer: My formal knowledge of CDT is from wikipedia and can be summarised as 'choose the A that maximises $U(A) = \sum_i P(A \rightarrow O_i)\, D(O_i)$, where D is the desirability function and P the probability function.'
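As a sanity check on that formula, here it is written out directly; a sketch only, with the causal probabilities $P(A \rightarrow O_i)$ handed in as a table and all the names illustrative:

```python
def cdt_choose(p_causal, desirability):
    """Pick the action A maximising sum_i P(A -> O_i) * D(O_i).
    p_causal[a][o] is the causal probability that doing a brings
    about outcome o; desirability[o] is D(o)."""
    def eu(action):
        return sum(p * desirability[o] for o, p in p_causal[action].items())
    return max(p_causal, key=eu)

# Newcomb with the simulation-aware probabilities from above:
p_causal = {
    "one-box": {"1e6": 1.0},
    "two-box": {"1e3": 0.5, "1e6+1e3": 0.5},
}
desirability = {"1e6": 1_000_000, "1e3": 1_000, "1e6+1e3": 1_001_000}
print(cdt_choose(p_causal, desirability))  # -> "one-box"
```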