So causal thinking in some way seems to violate the deterministic way the world works.
I agree there’s a point here that lots of decision theories / models of agents / etc. are dualistic instead of naturalistic, but I think that’s orthogonal to EDT vs. CDT vs. LDT; all of them assume that you could decide to take any of the actions that are available to you.
My point is that if we assume we can have a causal influence on the future, then this is already a kind of violation of determinism.
I suspect this is a confusion about free will. To be concrete, I think that a thermostat has a causal influence on the future, and does not violate determinism. It deterministically observes a sensor, and either turns on a heater or a cooler based on that sensor, in a way that does not flow backwards—turning on the heater manually will not affect the thermostat’s attempted actions except indirectly through the eventual effect on the sensor.
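A minimal sketch of that thermostat in Python (the thresholds and sensor readings are made up for illustration): the controller’s action is a pure function of what the sensor reads, so it exerts a causal influence on the future while remaining fully deterministic, and forcing the heater on from outside changes nothing about its decisions except, eventually, through the temperature the sensor reports.

```python
# Hypothetical deterministic thermostat: its action is a pure function of the
# sensor reading, so "causal influence on the future" needs no indeterminism.
def thermostat_action(sensor_temp: float, target: float = 20.0) -> str:
    if sensor_temp < target - 1.0:
        return "heat"   # too cold: turn the heater on
    if sensor_temp > target + 1.0:
        return "cool"   # too warm: turn the cooler on
    return "idle"

# Turning the heater on manually does not change this function's output;
# it only matters indirectly, once the room warms up and the sensor reads higher.
print(thermostat_action(17.5))  # "heat"
print(thermostat_action(17.5))  # still "heat", whatever we did to the heater by hand
print(thermostat_action(22.3))  # "cool", once the warmer room shows up at the sensor
```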
One could maybe even object to Newcomb’s original problem on similar grounds. Imagine the prediction has already been made 10 years ago. You learned about decision theories and went to one of the gurus in the meantime, and are now confronted with the problem. Are you now free to choose or does the prediction mess with your new, intended action, so that you can’t choose the way you want?
This depends on the formulation of Newcomb’s problem. If it says “Omega predicts you with 99% accuracy” or “Omega always predicts you correctly” (because, say, Omega is Laplace’s Demon), then Omega knew that you would learn about decision theory in the way that you did, and there’s still a logical dependence between the you looking at the boxes in reality and the you looking at the boxes in Omega’s imagination. (This assumes that the 99% fact is known of you in particular, rather than 99% accuracy being something true of humans in general; this gets rid of the case that 99% of the time people’s decision theories don’t change, but 1% of the time they do, and you might be in that camp.)
If instead the formulation is “Omega observed the you of 10 years ago, and was able to determine whether or not you then would have one-boxed or two-boxed on traditional Newcomb’s with perfect accuracy. The boxes just showed up now, and you have to decide whether to take one or both,” then the logical dependence is shattered, and two-boxing becomes the correct move.
If instead the formulation is “Omega observed the you of 10 years ago, and was able to determine whether or not you then would have one-boxed or two-boxed on this version of Newcomb’s with perfect accuracy. The boxes just showed up now, and you have to decide whether to take one or both,” then the logical dependence is still there, and one-boxing is the correct move.
(Why? Because how can you tell whether you’re the actual you looking at the real boxes, or the you in Omega’s imagination, looking at simulated boxes?)
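A rough expected-value sketch of why the dependence matters (assuming the usual payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one, which aren’t restated above): when Omega’s prediction tracks your present decision with 99% accuracy, one-boxing comes out ahead; when the prediction was fixed 10 years ago, independent of what you do now, two-boxing gains $1,000 no matter what the boxes contain.

```python
# Assumed Newcomb payoffs: the opaque box holds $1,000,000 if Omega predicted
# one-boxing (else $0); the transparent box always holds $1,000.
BIG, SMALL = 1_000_000, 1_000

def expected_payoff(action: str, p_opaque_full: float) -> float:
    """Expected payoff given the probability that the opaque box is full."""
    opaque = p_opaque_full * BIG
    return opaque if action == "one-box" else opaque + SMALL

# Formulation with logical dependence: the prediction matches *this* decision 99% of
# the time, so the probability of a full opaque box depends on which action you take.
print(expected_payoff("one-box", 0.99))   # 990,000.0
print(expected_payoff("two-box", 0.01))   # 11,000.0 -> one-boxing wins

# Formulation without dependence: the contents were fixed 10 years ago, so the same
# probability applies to both actions, and two-boxing is better by exactly $1,000.
for p in (0.0, 0.5, 1.0):
    print(expected_payoff("two-box", p) - expected_payoff("one-box", p))  # 1,000.0 each time
```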
I suspect this is a confusion about free will. To be concrete, I think that a thermostat has a causal influence on the future, and does not violate determinism. It deterministically observes a sensor, and either turns on a heater or a cooler based on that sensor, in a way that does not flow backwards—turning on the heater manually will not affect the thermostat’s attempted actions except indirectly through the eventual effect on the sensor.
Fair point :) What I meant was that for every world history, there is only one causal influence I could possibly have on the future. But CDT reasons through counterfactuals that are physically impossible (e.g. two-boxing in a world where there is money in box A), because it combines world states with actions it wouldn’t take in those worlds. EDT just assumes that it’s choosing between different histories, which is kind of “magical”, but at least all of those histories are internally consistent. Interestingly, Proof-Based DT, for example, would probably amount to the same kind of reasoning? Anyway, it’s probably a weak point, if it’s a point at all, and I fully agree that the issue is orthogonal to the DT question!
I basically agree with everything else you write, and I don’t think it contradicts my main points.
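One way to make the point about impossible counterfactuals concrete (again just a sketch with assumed payoffs): CDT scores each action against the same fixed distribution over box contents, so with a perfect predictor it ends up weighing state–action pairs that can never occur, whereas EDT conditions the contents on the action and only ever considers internally consistent histories.

```python
# Sketch of how CDT and EDT score actions against a perfect predictor (assumed payoffs).
BIG, SMALL = 1_000_000, 1_000

def payoff(opaque_full: bool, action: str) -> int:
    total = BIG if opaque_full else 0
    return total if action == "one-box" else total + SMALL

# CDT: hold the distribution over box contents fixed while varying the action.
# With a perfect predictor this weighs pairs like (opaque full, two-box) that never occur.
def cdt_value(action: str, p_full: float) -> float:
    return p_full * payoff(True, action) + (1 - p_full) * payoff(False, action)

# EDT: condition the contents on the action, so only internally consistent histories
# are considered (full box with one-boxing, empty box with two-boxing).
def edt_value(action: str) -> int:
    return payoff(action == "one-box", action)

print(cdt_value("two-box", 0.5) - cdt_value("one-box", 0.5))  # 1,000.0: CDT two-boxes for any p_full
print(edt_value("one-box"), edt_value("two-box"))             # 1,000,000 vs 1,000: EDT one-boxes
```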