In TSL (the smoking lesion problem), the presence of a lesion makes it more likely both that the agent will get cancer and that the agent will smoke. The EDT way of computing counterfactuals is to set a value for the agent’s decision, look at the effect on events that are caused by that choice, and treat the choice as Bayesian evidence about events that cause that choice. To EDT it appears that not smoking reduces the probability of having the lesion, and therefore of having cancer. The fact that EDT cannot represent that whether the agent has cancer is independent of its choice is a problem with EDT.
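To make the evidential computation concrete, here is a minimal sketch in Python. All of the numbers (the lesion prior, the conditional smoking and cancer rates, and the utilities) are illustrative assumptions added for the sketch, not part of the problem statement:

    # EDT in the smoking lesion, with made-up numbers.
    P_LESION = 0.2
    P_SMOKE_GIVEN_LESION = 0.9       # the lesion inclines you to smoke (assumed)
    P_SMOKE_GIVEN_NO_LESION = 0.1
    P_CANCER_GIVEN_LESION = 0.8      # cancer depends only on the lesion (assumed)
    P_CANCER_GIVEN_NO_LESION = 0.01
    U_SMOKE = 10                     # enjoyment of smoking (assumed utility)
    U_CANCER = -1000                 # disutility of cancer (assumed utility)

    def p_lesion_given(smoke):
        """EDT treats the act as Bayesian evidence about the lesion."""
        like_lesion = P_SMOKE_GIVEN_LESION if smoke else 1 - P_SMOKE_GIVEN_LESION
        like_none = P_SMOKE_GIVEN_NO_LESION if smoke else 1 - P_SMOKE_GIVEN_NO_LESION
        joint_lesion = like_lesion * P_LESION
        joint_none = like_none * (1 - P_LESION)
        return joint_lesion / (joint_lesion + joint_none)

    def evidential_utility(smoke):
        """V(a) = sum over states of P(state | act) * U(act, state)."""
        p_lesion = p_lesion_given(smoke)
        p_cancer = (p_lesion * P_CANCER_GIVEN_LESION
                    + (1 - p_lesion) * P_CANCER_GIVEN_NO_LESION)
        return (U_SMOKE if smoke else 0) + p_cancer * U_CANCER

    for act in (True, False):
        print(f"smoke={act}: P(lesion | act) = {p_lesion_given(act):.3f}, "
              f"EU = {evidential_utility(act):.1f}")

Under these numbers, smoking raises P(lesion | act) from about 0.03 to about 0.69, so the cancer term swamps the enjoyment term and EDT refrains: exactly the “not smoking reduces the probability of having the lesion” reasoning described above.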
(note: I’m not Psychohistorian)

To better clarify what I said in my other comment, imagine we do TSL with a curveball. Let’s say that, instead, the gene will make you smoke. That is, everyone with the gene will certainly smoke; in other words, everyone with the gene will reach the decision to smoke.
The gene will also give the person cancer. And you still retain the sensation of making a decision about whether you will smoke.
In that case, it most certainly does appear as if I’m choosing whether to have the gene. If I choose to smoke—hey, the gene’s power overtook me! If I don’t, well, then I must not have had it all along.
This appears to me isomorphic to Newcomb’s problem, which makes sense, given that EDT wins there.
But then, in the original TSL problem, why shouldn’t I take into account the fact that my reasoning would be corrupted by the gene? (“Hey, don’t worry man, your future cancer is already a done deal! Doesn’t matter if you light up now! Come on, it’s fun!” “Hey, that’s just the gene talking! I’m a rational person not corrupted by that logic!”)
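To put numbers on the “I must not have had it all along” update: here is the Bayes computation under these stipulations. The 20% prior on the gene and the smoking rate among gene-free people are assumptions added for illustration:

    # Curveball TSL: everyone with the gene smokes (stipulated).
    P_GENE = 0.2                 # assumed prior
    P_SMOKE_GIVEN_GENE = 1.0     # stipulated by the scenario
    P_SMOKE_GIVEN_NO_GENE = 0.5  # assumed: half the gene-free smoke anyway

    p_smoke = (P_SMOKE_GIVEN_GENE * P_GENE
               + P_SMOKE_GIVEN_NO_GENE * (1 - P_GENE))          # 0.6
    p_gene_given_smoke = P_SMOKE_GIVEN_GENE * P_GENE / p_smoke  # 0.333...
    p_gene_given_abstain = ((1 - P_SMOKE_GIVEN_GENE) * P_GENE
                            / (1 - p_smoke))                    # 0.0

    print(p_gene_given_smoke)    # smoking is evidence for the gene
    print(p_gene_given_abstain)  # abstaining proves you never had it

Conditioning on abstaining drives the probability of the gene all the way to zero, which is why it feels as if choosing not to smoke is choosing not to have the gene.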
This appears to me isomorphic to Newcomb’s problem, which makes sense, given that EDT wins there.
It is not isomorphic when you apply Timeless Decision Theory and add the node that represents the result of your decision theory. This node correlates with Omega’s prediction in Newcomb’s problem, but the corresponding node in strengthened TSL does not correlate with the presence of the gene. Counterfactually altering that node does not change whether you have the gene.
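A rough way to render the two graphs, as a sketch of my own (the node names and edges are my rendering, not canonical TDT notation):

    # Edges point from cause to effect; each node lists its parents.
    newcomb = {
        "decision_theory":  [],                    # the agent's algorithm
        "decision":         ["decision_theory"],
        "omega_prediction": ["decision_theory"],   # Omega examines the algorithm
        "box_contents":     ["omega_prediction"],
        "payoff":           ["decision", "box_contents"],
    }

    strengthened_tsl = {
        "gene":            [],
        "decision_theory": [],   # not caused by the gene, on the graph used here
        "decision":        ["decision_theory", "gene"],
        "cancer":          ["gene"],
    }

    def ancestors(graph, node):
        """All nodes with a directed path into `node`."""
        seen, stack = set(), list(graph[node])
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(graph[n])
        return seen

    def influenced_by(graph, node):
        """Nodes downstream of `node`: what surgery on it can reach."""
        return {m for m in graph if node in ancestors(graph, m)}

    print(influenced_by(newcomb, "decision_theory"))
    # {'decision', 'omega_prediction', 'box_contents', 'payoff'}
    print(influenced_by(strengthened_tsl, "decision_theory"))
    # {'decision'}: the surgery never reaches the gene or the cancer node.

In Newcomb’s problem the surgery on the decision-theory node reaches Omega’s prediction, so it can counterfactually alter the box contents; in strengthened TSL it reaches only the decision, never the gene.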
But the scenario does not really make sense. If everyone with the gene will smoke, and everyone who uses EDT chooses not to smoke, then you could eliminate the gene from a population by teaching everyone to use EDT. I think the ultimate problem is that a scenario that dictates the agent’s choice regardless of their decision theory is a scenario in which decision theories cannot be faithfully executed; that is, the scenario denies that the agent’s innards can implement any decision theory that produces a different choice.
Counterfactually altering that node does not change whether you have the gene.
But the scenario does not really make sense
Under the (strange) stipulations I gave, altering the node does alter the gene. The fact that it doesn’t make sense is a result of the situation’s parallel with Newcomb’s problem, which, as Psychohistorian argues, requires an equally nonsensical scenario.
I think the ultimate problem is that a scenario that dictates the agent’s choice regardless of their decision theory is a scenario in which decision theories cannot be faithfully executed; that is, the scenario denies that the agent’s innards can implement any decision theory that produces a different choice.
But this problem arises just the same in Newcomb’s problem! If Omega perfectly predicts your choice, then you can’t do anything but that choice, and the problem is equally meaningless: your choice is dictated irrespective of your decision theory. Just as we could eliminate the gene by teaching EDT, we could make Omega always fill box B with money by teaching TDT.
Just as we could eliminate the gene by teaching EDT, we could make Omega always fill box B with money by teaching TDT.
It makes sense that you can alter Omega’s prediction by altering the agent’s decision theory, because the decision theory is examined in making the prediction. This does not correspond to the smoking genes. The inheritance of genes that do not cause a person to have a particular decision theory (and are not correlated with genes that do) is not correlated with the person’s decision theory. And if you are postulating that the smoking gene also causes the person to have a particular decision theory, then you have a fully general counterargument against any decision theory: just suppose it is caused by the same gene that causes cancer.
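A toy simulation of that asymmetry (the population size, the gene’s base rate, and which theory one-boxes are all assumptions of the sketch):

    import random

    random.seed(0)

    def population(n, theory):
        """Everyone is taught the same decision theory; the gene is
        inherited independently of it (20% base rate is an assumption)."""
        return [{"gene": random.random() < 0.2, "theory": theory}
                for _ in range(n)]

    def omega_prediction(person):
        # Omega examines the algorithm, so the prediction tracks the theory.
        return "one-box" if person["theory"] == "TDT" else "two-box"

    for theory in ("CDT", "TDT"):
        pop = population(10_000, theory)
        gene_rate = sum(p["gene"] for p in pop) / len(pop)
        one_boxers = sum(omega_prediction(p) == "one-box" for p in pop) / len(pop)
        print(f"{theory}: gene rate ~{gene_rate:.2f}, "
              f"predicted one-boxers {one_boxers:.0%}")

Teaching everyone TDT flips every prediction from two-box to one-box, but leaves the gene frequency untouched: nothing in the gene’s inheritance ever examines the algorithm.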
Sorry, I missed this when you first posted it.

It makes sense that you can alter Omega’s prediction by altering the agent’s decision theory, because the decision theory is examined in making the prediction.
But you can’t alter Omega’s prediction at the point where you enter the problem, just like you can’t alter the presence of the gene at the point where you enter the TSL problem. (Yes, redundancy, I know, but it flows better.)
The inheritance of genes that do not cause a person to have a particular decision theory (and are not correlated with genes that do) is not correlated with the person’s decision theory …
Well, under the altered TSL problem I posited, the gene does cause a particular decision theory (or at least, limits you to those decision theories that result in a decision to smoke).
And if you are postulating that the smoking gene also causes the person to have a particular decision theory, then you have a fully general counterargument against any decision theory: just suppose it is caused by the same gene that causes cancer.
And I have a fully general counterargument against any decision theory in Newcomb’s problem, too! It (the decision theory) was caused by the same observation by Omega that led it to choose what to put in the second box.
Bringing this back to the original topic: Psychohistorian appears correct to say that the problems force you to make contradictory assumptions.
You are presenting a symmetry between the two cases by ignoring details. If you look at which events cause which, you can see the differences.
And I have a fully general counterargument against any decision theory in Newcomb’s problem, too! It (the decision theory) was caused by the same observation by Omega that led it to choose what to put in the second box.
So, you can make a Newcomb-like problem (Omega makes a decision based on its prediction of your decision, in a way that it explains to you before making the decision) in which TDT does not win?
You are presenting a symmetry between the two cases by ignoring details. If you look at which events cause which, you can see the differences.
I don’t see it. Would you mind pointing out the obvious for me?
So, you can make a Newcomb-like problem (Omega makes a decision based on its prediction of your decision, in a way that it explains to you before making the decision) in which TDT does not win?
The modified smoking lesion problem I just gave. TDT reasons (parallel to the normal smoking lesion) that “I have the gene or I don’t, so it doesn’t matter what I do”. But strangely, everyone who doesn’t smoke ends up not getting cancer.
The modified smoking lesion problem is not based on Omega making predictions. If you try to come up with such an example that stumps TDT, you will run into the asymmetries between Omega’s predictions and the common-cause gene.
The modified smoking lesion problem is not based on Omega making predictions
It still maps over. You just replace “Omega predicts one-boxing or two-boxing” with “you have or don’t have the gene”. “Omega predicts one-boxing” corresponds to not having the gene.
If it maps over, why does TDT one-box in Newcomb’s problem and smoke in the modified smoking lesion problem?

I meant that something serves as the functional equivalent of Omega. There is a dissimilarity, but not enough to make it irrelevant. The point that Psychohistorian and I are making is that the problems have subtly contradictory premises, which I think the examples (including modified TSL) show. Because the premises are contradictory, you can assume away a different one in each case.
In the original TSL, TDT says, “hey, it’s decided anyway whether I have cancer, so my choice doesn’t affect my cancer”. But in Newcomb’s problem, TDT says, “even though Omega has decided the contents of the box, my choice affects my reward”.