You are presenting a symmetry between the two cases by ignoring details. If you look at which events cause which, you can see the differences.
And I have a fully general counterargument against any decision theory in Newcomb’s problem, too! The decision theory itself was caused by the same observation by Omega that led it to choose what to put in the second box.
So, you can make a Newcomb-like problem (Omega makes a decision based on its prediction of your decision, in a way that it explains to you before making the decision) in which TDT does not win?
> You are presenting a symmetry between the two cases by ignoring details. If you look at which events cause which, you can see the differences.
I don’t see it. Would you mind pointing out the obvious for me?
> So, you can make a Newcomb-like problem (Omega makes a decision based on its prediction of your decision, in a way that it explains to you before making the decision) in which TDT does not win?
The modified smoking lesion problem I just gave. TDT reasons (parallel to the normal smoking lesion) that “I have the gene or I don’t, so it doesn’t matter what I do”. But strangely, everyone who doesn’t smoke ends up not getting cancer.
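The modified setup can be put as a tiny simulation (the population size and gene frequency are made-up illustrative numbers; the thread gives none). The gene is the sole cause of both smoking and cancer, so no non-smoker ever gets cancer even though smoking causes nothing:

```python
import random

random.seed(0)

# Assumed illustrative numbers; the thread specifies none of these.
population = []
for _ in range(10_000):
    gene = random.random() < 0.5
    smokes = gene   # in this variant the gene fully determines the choice
    cancer = gene   # and the gene, not smoking, causes the cancer
    population.append((smokes, cancer))

cancers_among_nonsmokers = sum(c for s, c in population if not s)
print(cancers_among_nonsmokers)  # 0: everyone who doesn't smoke stays cancer-free
```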
The modified smoking lesion problem is not based on Omega making predictions. If you try to come up with such an example that stumps TDT, you will run into the asymmetries between Omega’s predictions and the common-cause gene.
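The asymmetry being pointed to can be sketched as two causal graphs (this framing and the node names are my own, not anything from the thread): in Newcomb’s problem your decision procedure is an ancestor of Omega’s prediction, while in the smoking lesion your choice is a descendant of the gene, which separately causes cancer.

```python
# Rough causal graphs, written as parent -> children maps (illustrative).
newcomb = {
    "decision procedure": {"choice", "prediction"},
    "prediction": {"box contents"},
}
smoking_lesion = {
    "gene": {"choice", "cancer"},
}

def descendants(graph, node):
    """Everything causally downstream of `node`."""
    out = set()
    for child in graph.get(node, ()):
        out |= {child} | descendants(graph, child)
    return out

# Your reasoning influences what Omega predicts...
print("prediction" in descendants(newcomb, "decision procedure"))  # True
# ...but nothing you decide influences whether you have the gene.
print("gene" in descendants(smoking_lesion, "choice"))             # False
```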
> The modified smoking lesion problem is not based on Omega making predictions.
It still maps over. You just replace “Omega predicts one-boxing or two-boxing” with “you have or don’t have the gene”. “Omega predicts one-boxing” corresponds to not having the gene.
If it maps over, why does TDT one-box in Newcomb’s problem and smoke in the modified smoking lesion problem?
I meant that something plays the functional role of Omega. There is a dissimilarity, but not one big enough to make the mapping irrelevant. The point Psychohistorian and I are making is that the problems have subtly contradictory premises, which I think the examples (including the modified TSL) show. Because the premises are contradictory, you can assume away a different one in each case.
In the original TSL, TDT says, “hey, it’s already decided whether I have cancer, so my choice doesn’t affect my cancer.” But in Newcomb’s problem, TDT says, “even though Omega has already decided the contents of the box, my choice affects my reward.”
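To make the Newcomb side of that contrast concrete, here is the expected payoff under the standard $1,000,000 / $1,000 values; the 99-in-100 predictor accuracy is an assumed illustrative figure, not something from the thread. Conditioning on the prediction tracking your choice, one-boxing comes out far ahead even though the box is already filled:

```python
# Conventional Newcomb payoffs; predictor accuracy of 99/100 is assumed.
def newcomb_expected(choice, hits=99, misses=1):
    total = hits + misses
    if choice == "one-box":
        # Omega predicted one-boxing `hits` times out of `total`.
        return (hits * 1_000_000 + misses * 0) / total
    # Two-boxing: $1,000 always, plus $1,000,000 only when Omega mispredicts.
    return (hits * 1_000 + misses * 1_001_000) / total

print(newcomb_expected("one-box"))  # 990000.0
print(newcomb_expected("two-box"))  # 11000.0
```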