How confident are you that you understand TDT better than Eliezer does? Because he seems to think that TDT implies smoking.
I don’t think I understand TDT better than Eliezer. I think that any sensible decision theory will give corresponding answers to Newcomb and the Smoking Lesion (one-boxing goes with not smoking; two-boxing goes with smoking), and I am assuming that TDT is sensible. I do know that Eliezer is in favor both of one-boxing and of cooperating in the Prisoner’s Dilemma, and both of those require the kind of reasoning that leads to not smoking. That is why I said that I “suspect” that TDT means not smoking.
> I don’t think I understand TDT better than Eliezer. I think that any sensible decision theory will give corresponding answers to Newcomb and the Smoking Lesion, and I am assuming that TDT is sensible.
Since Eliezer is on record as saying that TDT advocates non-corresponding answers to Newcomb and the Smoking Lesion, it seems to me that you should be extremely uncertain about at least one of (1) whether TDT is actually sensible, (2) whether Eliezer actually understands his own theory, and (3) whether you are correct that sensible theories give corresponding answers in those cases.
Because if sensible ⇒ corresponding answers and TDT is sensible, then TDT gives corresponding answers; and if Eliezer understands his own theory, then it doesn’t. The three claims cannot all be true at once.
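To spell out that tension formally (the propositional labels here are my own shorthand, not anything from the thread):

```latex
% S: TDT is sensible        E: Eliezer understands his own theory
% K: sensible theories give corresponding answers to Newcomb / Smoking Lesion
% C: TDT gives corresponding answers
\[
(K \land S) \to C, \qquad E \to \lnot C,
\qquad \therefore\ \lnot (K \land S \land E)
\]
```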
I looked back at some of Eliezer’s early posts on this and they certainly didn’t claim to be fully worked out; he said things like “this part is still magic,” and so on. However, I have significantly increased my estimate of the probability that TDT is incoherent, or at any rate arbitrary; he did seem to want to say that you would consider yourself the cause of the million being in the box, and I don’t think it is true in any non-arbitrary way that you should consider yourself the cause of the million but not of whether you have the lesion. As an example (which is certainly very different from Eliezer saying it), bogus seemed to assert that the difference is just in the presentation of the problem, namely whether you count yourself as being able to affect something or not.
> I think that any sensible decision theory will give corresponding answers to Newcomb and the Smoking Lesion, and I am assuming that TDT is sensible.
I think you don’t quite understand either how TDT is supposed to work, or how the way it works can be “sensible”. If you exogenously alter every “smoke” decision to “don’t smoke” in the Smoking Lesion, your payoff doesn’t improve, by construction. If you exogenously alter every “two-box” decision to “one-box”, this does change your payoff. Note the ‘exogenously’ qualification above, which is quite important, and note that the “exogenous” change must alter all logically-connected choices in the same way: in Newcomb, the very same exogenous input acts on Omega’s prediction as on your actual choice; in the Smoking Lesion, the change to “smoke” or “don’t smoke” occurs regardless of whether you have the lesion or not. (A toy simulation below makes the asymmetry concrete.)
(It might be that you could express the problems in EDT in a way that leads to the correct choice, by adding hardwired models of these “exogenous but logically-connected” decisions. But this isn’t something that most EDT advocates would describe as a necessary part of that theory—and this is all the more true if a similar change could work for CDT!)
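Here is a toy Python simulation of that asymmetry. The payoff numbers (+1 for smoking, −100 for cancer) and the 50% rates are invented for illustration; only the causal structure is the point:

```python
import random

def smoking_lesion(force_smoke=None, trials=100_000):
    """Toy Smoking Lesion: the lesion causes both cancer and the
    urge to smoke.  An exogenous change to the choice does not touch
    the lesion, so it cannot touch the cancer risk."""
    total = 0
    for _ in range(trials):
        lesion = random.random() < 0.5                    # assumed 50% lesion rate
        smoke = lesion if force_smoke is None else force_smoke
        total += (1 if smoke else 0) - (100 if lesion else 0)
    return total / trials

def newcomb(force_one_box=None, trials=100_000):
    """Toy Newcomb: one exogenous input (the agent's disposition)
    feeds BOTH Omega's prediction and the actual choice, so forcing
    the choice drags the prediction along with it."""
    total = 0
    for _ in range(trials):
        disposition = random.random() < 0.5               # True = one-box
        one_box = disposition if force_one_box is None else force_one_box
        prediction = one_box                              # logically connected: same input
        big_box = 1_000_000 if prediction else 0
        total += big_box if one_box else big_box + 1_000
    return total / trials

print(smoking_lesion())                   # baseline: about -49.5
print(smoking_lesion(force_smoke=False))  # forced "don't smoke": about -50, no improvement
print(newcomb())                          # baseline: about 500,500
print(newcomb(force_one_box=True))        # forced "one-box": exactly 1,000,000
```

Forcing “don’t smoke” leaves the lesion, and hence the cancer term, untouched, while forcing “one-box” changes Omega’s prediction along with the choice.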
Omega’s prediction in reality is based on the physical state of your brain. So if altering your choice in Newcomb alters Omega’s prediction, it also alters the state of your brain. And if that is the case, altering your choice can likewise alter the state of your brain when you choose not to smoke in the Smoking Lesion.
The ‘state of your brain’ in Newcomb and the Smoking Lesion need not be directly comparable. If you could alter the state of your brain in a way that makes you better off in the Smoking Lesion just by exogenously forcing the “don’t smoke” choice, then the problem statement wouldn’t be allowed to include the proviso that choosing “don’t smoke” doesn’t improve your payoff.
The problem statement does not include the proviso that choosing not to smoke does not improve the payoff. It just says that if you have the lesion, you get cancer, and if you don’t, you don’t. And it says that people who choose to smoke turn out to have the lesion, and people who choose not to smoke turn out not to have the lesion. There is no proviso about not smoking failing to improve the payoff.
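On that reading, the stated correlation does all the work. Using the same illustrative payoff numbers as in the simulation above (these are mine, not part of the problem), a two-line expected-value check shows what the statement implies:

```python
# Reading the problem exactly as stated: smokers turn out to have the
# lesion, non-smokers turn out not to (illustrative: smoking +1, cancer -100).
payoff_if_smoke = 1 - 100  # you smoke, so you have the lesion, so you get cancer
payoff_if_abstain = 0      # you abstain, so no lesion, so no cancer
print(payoff_if_smoke, payoff_if_abstain)  # -99 vs 0: not smoking comes out ahead
```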
You might be right. But then TDT chooses not to smoke precisely when CDT does, because there is nothing that’s logically-but-not-physically/causally connected with the exogenous decision whether or not to smoke. Which arguably makes this version of the problem quite uninteresting.