Yes, I agree with you. But when you tell some people that, the question arises of what is in the big-money box after Omega leaves… and the answer is “if you’re considering this, nothing.”
A lot of others (non-LW people) I tell this to say it doesn’t sound right. The argument just shows that the seeming closed loop is not actually a closed loop, in a very simple and intuitive way** (oh, and it actually agrees with ‘there is no free will’). It also made me see the whole thing in a new light: maybe other things that look like closed loops can be shown not to be in similar ways.
** Anna Salamon’s cutting argument is very good too, but a) it doesn’t make the closed-loop-seeming thing any less closed-loop-seeming, and b) it’s hard for most people to understand, and I’m guessing it will look like garbage to people who don’t default to compatibilism.
I suppose. When dealing with believers in noncompatibilist free will, I typically just accept that on their view a reliable Predictor is not possible in the first place, and so they have two choices: either refuse to engage with the thought experiment at all, or accept that, for the purposes of this thought experiment, they’ve been demonstrated empirically to be wrong about the possibility of a reliable Predictor (and consequently about their belief in free will).
That said, I can respect someone refusing to engage with a thought experiment at all, if they consider its implications absurd.
As long as we’re here, I can also respect someone whose answer to “Assume Predictor yadda yadda what do you do?” is “How should I know what I do? I am not a Predictor. I do whatever it is someone like me does in that situation; beats me what that actually is.”
I usually deal with people who don’t have strong opinions either way, so I try to convince them. With committed non-compatibilists, your approach makes sense.
Also, it struck me today that this gives a way of one-boxing within CDT. If you naively black-box the prediction, you get the expected-utility table {{1000, 0}, {1e6+1e3, 1e6}} (rows: Omega predicted two-boxing / one-boxing; columns: you two-box / one-box), where two-boxing always gives you 1000 dollars more.
But once you realise that you might be the simulated version, the expected utility of one-boxing is 1e6 while that of two-boxing is now 5e5+1e3. So, one-box.
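A minimal sketch of both calculations in Python (the 50/50 chance of being the simulation, and the big box being full conditional on being the real agent, are the assumptions behind the numbers above):

```python
# Payoffs in dollars: the big box holds 1e6 iff Omega predicted one-boxing;
# the small box always holds 1e3.
BIG, SMALL = 1_000_000, 1_000

def eu_naive(action, p_full=0.5):
    """Naive CDT: black-box the prediction as a fixed chance p_full that
    the big box is full. Two-boxing dominates for any p_full."""
    if action == "one-box":
        return p_full * BIG
    return p_full * (BIG + SMALL) + (1 - p_full) * SMALL

def eu_anthropic(action, p_sim=0.5):
    """Assumed setup: with probability p_sim you are Omega's simulation,
    and the simulation's choice fixes the big box for the real agent."""
    if action == "one-box":
        # The sim one-boxes, so the box is full and the real agent
        # collects 1e6, whichever copy you happen to be.
        return BIG
    # As the sim, two-boxing empties the box: the real agent gets only 1e3.
    # As the real agent (box already full, per the assumption above),
    # two-boxing grabs both boxes.
    return p_sim * SMALL + (1 - p_sim) * (BIG + SMALL)

for act in ("one-box", "two-box"):
    print(act, eu_naive(act), eu_anthropic(act))
# one-box: 500000.0 / 1000000    two-box: 501000.0 / 501000.0
```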
A similar analysis applies to the counterfactual mugging.
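For instance, under the usual statement of the counterfactual mugging ($100 demanded on tails, $10,000 paid on heads iff Omega predicts you’d have paid), and again assuming a 50/50 chance that you’re the simulated, counterfactual you:

```python
ASK, PRIZE = 100, 10_000

def eu_mugging(action, p_sim=0.5):
    """Given that Omega is asking you for the $100, assume probability
    p_sim that you are the simulation run in the heads-world."""
    if action == "pay":
        # As the sim, paying causes the real you to collect the prize;
        # as the real you (tails-world), paying just costs $100.
        return p_sim * PRIZE + (1 - p_sim) * (-ASK)
    return 0.0  # refusing pays nothing in either role

print(eu_mugging("pay"), eu_mugging("refuse"))  # 4950.0 0.0 -> pay
```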
Further, this argument actually creates immunity to the response ‘I’ll just find a qubit arbitrarily far back in time and use the measurement result to decide.’ I think a self-respecting TDT would also have this immunity, but there’s a lot to be said for finding out where theories fail—and Newcomb’s problem (if you assume the argument about you-completeness) seems not to be such a place for CDT.
Disclaimer: My formal knowledge of CDT is from Wikipedia and can be summarised as ‘choose the A that maximises $U(A) = \sum_i P(A \rightarrow O_i) \, D(O_i)$, where D is the desirability function and P the probability function.’
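In code, that rule is just an argmax (a hypothetical sketch; the `actions`, `outcomes`, causal probability `P(A, O)` and desirability `D(O)` are all taken as given):

```python
def cdt_choice(actions, outcomes, P, D):
    # Pick the action A maximising U(A) = sum_i P(A -> O_i) * D(O_i).
    return max(actions, key=lambda A: sum(P(A, O) * D(O) for O in outcomes))
```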