There seems to be an especially strong intuition of “absence of free will” inherent to the Coin Flip Creation problem. When presented with the problem, many respond that if someone had created their source code, they didn’t have any choice to begin with. But that’s the exact situation in which we all find ourselves at all times!
I think this is missing the point of the objection.
Consider the three different decision theories, CDT, EDT, and LDT; suppose there are three gurus who teach those decision theories to any orphans left in their care. And suppose Omega does the coin flip six times, ends up with three heads children and three tails children, and gives a matched pair to each of the gurus.
When the day comes, the first set of children reason that they can’t change the coin flip because of the lack of causal dependence, and try to take both boxes. One succeeds, and the other discovers that, mysteriously, they one-boxed instead, and got the million.
The second set of children reason that taking one box is correlated with having the million, and so they try to take just the one box. One succeeds, and the other discovers that, mysteriously, they two-boxed instead, and only got the thousand.
The third set, you know the drill. One one-boxes, the other two-boxes.
The point of decision theories is not that they let you reach from beyond the Matrix and change reality in violation of physics; it’s that you predictably act in ways that optimize for various criteria. But this is a decision problem where your action has been divorced from your intended action, and so attributing the victory of heads children to EDT is mistaken, because of the tails child with EDT who wanted to two-box but couldn’t.
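(Also, Betteridge’s Law.)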
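As a rough illustration of that last point, here is a minimal sketch of the guru setup in Python. The payoffs ($1,000,000 in the box that is filled exactly when the coin lands heads, $1,000 in the box that is always filled) and the convention that a heads child is built to one-box are assumptions made for concreteness, not something restated in this thread:

```python
# Minimal sketch: the child's action and the box contents are both fixed by
# Omega's coin, so the decision theory their guru taught them never enters
# the payoff at all. (Payoff amounts are assumed, Newcomb-style.)

def payoff(coin: str) -> int:
    opaque = 1_000_000 if coin == "heads" else 0   # filled iff heads
    transparent = 1_000                            # always filled
    if coin == "heads":
        return opaque               # heads child one-boxes, whatever they were taught
    return opaque + transparent     # tails child two-boxes, whatever they were taught

for guru in ("CDT", "EDT", "LDT"):
    print(f"{guru} guru's pair: heads child ${payoff('heads'):,}, "
          f"tails child ${payoff('tails'):,}")
# Every guru's pair earns the same $1,000,000 + $1,000 between them, which is
# the sense in which the heads children's win can't be credited to EDT.
```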
The point of decision theories is not that they let you reach from beyond the Matrix and change reality in violation of physics; it’s that you predictably act in ways that optimize for various criteria.
I agree with this. But I would argue that causal counterfactuals somehow assume that we can “reach from beyond the Matrix and change reality in violation of physics”. They work by comparing what would happen if we detached our “action node” from its ancestor nodes and manipulated it in different ways. So causal thinking in some way seems to violate the deterministic way the world works. Needless to say, all decision theories somehow have to reason through counterfactuals, so they all have to form “impossible” hypotheses. My point is that if we assume that we can have a causal influence on the future, then this is already a kind of violation of determinism, and I would reason that assuming that we can also have a retro-causal one on the past doesn’t necessarily make things worse. In some sense, it might even be more in line with how the world works: the future is as fixed as the past, and the EDT approach is to merely “find out” which respective past and future are true.
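For concreteness, the contrast being pointed at is usually written with Pearl’s do-operator, which is exactly the “detach the action node from its ancestors” move; the notation below is a standard gloss rather than anything from this exchange:

$$EU_{\mathrm{CDT}}(a) = \sum_{o} P\bigl(o \mid \mathrm{do}(a)\bigr)\,U(o), \qquad EU_{\mathrm{EDT}}(a) = \sum_{o} P(o \mid a)\,U(o).$$

The two come apart precisely when the action shares an ancestor with the outcome (here, Omega’s coin and your source code), which is why intervening and conditioning recommend different boxes.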
But this is a decision problem where your action has been divorced from your intended action, and so attributing the victory of heads children to EDT is mistaken, because of the tails child with EDT who wanted to two-box but couldn’t.
Hmm, I’m not sure. It seems as though, in your setup, the gurus have to change the children’s decision algorithms, in which case of course the correlation would vanish. Or the children use a meta decision theory like “think about the topic and consider what the guru tells you and then try to somehow do whatever winning means”. But if Omega created you with the intention of making you one-box or two-box, it could easily just have added some rule or changed the meta theory so that you would end up just not being convinced of the “wrong” theory. You would have magically ended up doing (and thinking) the right thing, without any “wanting to but not being able to”. I mean, I am trying to convince you of some decision theory right now, and you already have some knowledge and a meta decision theory that will ultimately lead you to either adopt or reject it. Maybe the fact that you’re not yet convinced shows that you’re living in the tails world? ;) Maybe Omega’s trick is to make the tails people think about guru cases in order to get them to reject EDT?
One could maybe even object to Newcomb’s original problem on similar grounds. Imagine the prediction was already made 10 years ago. You learned about decision theories and went to one of the gurus in the meantime, and are now confronted with the problem. Are you now free to choose, or does the prediction mess with your new, intended action, so that you can’t choose the way you want? I don’t believe so – you’ll feel just as free to choose as if the prediction had happened 10 minutes ago. Only after deciding freely do you find out that you had been determined to decide this way from the beginning, because Omega of course also accounted for the guru.
In general, I tend to think that adding some “outside influence” to a Newcomb’s problem either makes it a different decision problem, or it’s irrelevant and just confuses things.
So causal thinking in some way seems to violate the deterministic way the world works.
I agree there’s a point here that lots of decision theories / models of agents / etc. are dualistic instead of naturalistic, but I think that’s orthogonal to EDT vs. CDT vs. LDT; all of them assume that you could decide to take any of the actions that are available to you.
My point is that if we assume that we can have a causal influence on the future, then this is already a kind of violation of determinism
I suspect this is a confusion about free will. To be concrete, I think that a thermostat has a causal influence on the future, and does not violate determinism. It deterministically observes a sensor, and either turns on a heater or a cooler based on that sensor, in a way that does not flow backwards—turning on the heater manually will not affect the thermostat’s attempted actions except indirectly through the eventual effect on the sensor.
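A minimal sketch of that thermostat, with made-up dynamics (the setpoint, thresholds, and temperature increments are illustrative only):

```python
# Deterministic controller: same sensor reading, same action, every time.
# Influence flows sensor -> action -> room, never backwards into the policy.

def thermostat_step(sensor_temp: float, setpoint: float = 20.0) -> str:
    if sensor_temp < setpoint - 1.0:
        return "heater_on"
    if sensor_temp > setpoint + 1.0:
        return "cooler_on"
    return "off"

temp = 17.0  # illustrative starting temperature
for _ in range(5):
    action = thermostat_step(temp)
    temp += {"heater_on": 1.5, "cooler_on": -1.5, "off": 0.0}[action]
    print(f"action={action}, room now at {temp:.1f}")
# Forcing the heater on by hand would change `temp`, and only through that
# changed sensor reading would it ever change the thermostat's next action.
```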
One could maybe even object to Newcomb’s original problem on similar grounds. Imagine the prediction was already made 10 years ago. You learned about decision theories and went to one of the gurus in the meantime, and are now confronted with the problem. Are you now free to choose, or does the prediction mess with your new, intended action, so that you can’t choose the way you want?
This depends on the formulation of Newcomb’s problem. If it says “Omega predicts you with 99% accuracy” or “Omega always predicts you correctly” (because, say, Omega is Laplace’s Demon), then Omega knew that you would learn about decision theory in the way that you did, and there’s still a logical dependence between the you looking at the boxes in reality and the you looking at the boxes in Omega’s imagination. (This assumes that the 99% fact is known of you in particular, rather than 99% accuracy being something true of humans in general; this gets rid of the case that 99% of the time people’s decision theories don’t change, but 1% of the time they do, and you might be in that camp.)
If instead the formulation is “Omega observed the you of 10 years ago, and was able to determine whether or not you then would have one-boxed or two-boxed on traditional Newcomb’s with perfect accuracy. The boxes just showed up now, and you have to decide whether to take one or both,” then the logical dependence is shattered, and two-boxing becomes the correct move.
If instead the formulation is “Omega observed the you of 10 years ago, and was able to determine whether or not you then would have one-boxed or two-boxed on this version of Newcomb’s with perfect accuracy. The boxes just showed up now, and you have to decide whether to take one or both,” then the logical dependence is still there, and one-boxing is the correct move.
(Why? Because how can you tell whether you’re the actual you looking at the real boxes, or the you in Omega’s imagination, looking at simulated boxes?)
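A rough expected-value comparison of these formulations, assuming standard $1,000,000/$1,000 payoffs and the 99% accuracy figure from above (the fixed probability p in the “shattered” case is an arbitrary illustrative number):

```python
M, K, ACC = 1_000_000, 1_000, 0.99   # assumed Newcomb-style payoffs and accuracy

# Logical dependence intact: conditioning on your action shifts P(million).
ev_one_box_linked = ACC * M                  # 990,000
ev_two_box_linked = (1 - ACC) * M + K        # ~11,000

# Dependence shattered: the prediction was fixed by the you of 10 years ago,
# so P(million) = p is the same whichever action you take now.
p = 0.5                                      # any fixed p gives the same comparison
ev_one_box_shattered = p * M                 # 500,000
ev_two_box_shattered = p * M + K             # 501,000

print(ev_one_box_linked, ev_two_box_linked)
print(ev_one_box_shattered, ev_two_box_shattered)
# With the dependence intact, one-boxing wins by a wide margin; once it is
# shattered, two-boxing is better by exactly the $1,000 in the second box.
```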
I suspect this is a confusion about free will. To be concrete, I think that a thermostat has a causal influence on the future, and does not violate determinism. It deterministically observes a sensor, and either turns on a heater or a cooler based on that sensor, in a way that does not flow backwards—turning on the heater manually will not affect the thermostat’s attempted actions except indirectly through the eventual effect on the sensor.
Fair point :) What I meant was that for every world history, there is only one causal influence I could possibly have on the future. But CDT reasons through counterfactuals that are physically impossible (e.g. two-boxing in a world where there is money in box A), because it combines world states with actions it wouldn’t take in those worlds. EDT just assumes that it’s choosing between different histories, which is kind of “magical”, but at least all those histories are internally consistent. Interestingly, Proof-Based DT, for example, would probably amount to the same kind of reasoning? Anyway, it’s probably a weak point, if it’s a point at all, and I fully agree that the issue is orthogonal to the DT question!
I basically agree with everything else you write, and I don’t think it contradicts my main points.
“because of the tails child with EDT who wanted to two-box but couldn’t.”
This is also a very common situation in the real world: deciding to do something and then going and doing something else, like when you decide to do your work and then waste your time instead.