I have never agreed that there is a difference between the smoking lesion and Newcomb’s problem. I would one-box, and I would not smoke. Long discussion in the comments here.
Interesting, thanks! I thought it was more or less the consensus that the smoking lesion refutes EDT. So where should I look to see EDT refuted? The absent-minded driver, evidential blackmail, counterfactual mugging, or something else?
Yes, as you can see from the comments on this post, there seems to be some consensus that the smoking lesion refutes EDT.
The problem is that the smoking lesion, in decision-theoretic terms, is entirely the same as Newcomb’s problem, and there is also a consensus that EDT gets the right answer in Newcomb’s problem.
Your post shows that the smoking lesion is the same as Newcomb’s problem, and thus exposes the contradiction in that consensus. Basically, there is a consensus, but it is mistaken.
Personally I haven’t seen any real refutation of EDT.
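To see the structural identity concretely, here is a minimal sketch (my own framing, with purely illustrative numbers, not anything from the post): EDT’s calculation has the same shape in both problems, since in each case it just conditions the hidden state on the action.

```python
# A minimal sketch with purely illustrative numbers: EDT conditions on the
# action and weighs outcomes by P(hidden state | action), and that calculation
# has exactly the same shape in Newcomb's problem and in the smoking lesion.

def edt_value(p_good_state_given_action, value_good, value_bad, action_bonus):
    """Conditional expected utility of taking the action."""
    p = p_good_state_given_action
    return p * value_good + (1 - p) * value_bad + action_bonus

# Newcomb: "good state" = box B is full; one-boxing is strong evidence for it.
one_box = edt_value(0.99, 1_000_000, 0, action_bonus=0)
two_box = edt_value(0.01, 1_000_000, 0, action_bonus=1_000)

# Smoking lesion: "good state" = no lesion; abstaining is evidence for it.
# (Same magnitudes reused only to highlight that the form is identical.)
abstain = edt_value(0.90, 1_000_000, 0, action_bonus=0)
smoke = edt_value(0.10, 1_000_000, 0, action_bonus=1_000)

print(one_box > two_box, abstain > smoke)  # True True: EDT one-boxes and abstains.
```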
That does seem like the tentative consensus, and I was unpleasantly surprised to see someone on LW who would not chew the gum.
We should be asking what decision procedure gives us more money, e.g. if we’re writing a computer program to make a decision for us. You may be tempted to say that if Omega is physical—a premise not usually stated explicitly, but one I’m happy to grant—then it must be looking at some physical events linked to your action and not looking at the answer given by your abstract decision procedure. A procedure based on that assumption would lead you to two-box. This thinking seems likely to hurt you in analogous real-life situations, unless you have greater skill at lying or faking signals than (my model of) either a random human being or a random human of high intelligence. Discussing it, even ‘anonymously’, would constitute further evidence that you lack the skill to make this work.
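To make “which procedure gives us more money” concrete, here is a small sketch using the standard Newcomb payoffs; the predictor accuracies are my own illustrative choices.

```python
# A small sketch, assuming the standard Newcomb payoffs and a predictor of
# varying accuracy; the question is simply which procedure ends up with more money.

def expected_payoff(one_box, accuracy):
    """Expected dollars for a program committed to one- or two-boxing."""
    big, small = 1_000_000, 1_000
    if one_box:
        # Box B is full exactly when Omega correctly predicted one-boxing.
        return accuracy * big
    # Two-boxing always gets the small box; B is full only if Omega erred.
    return (1 - accuracy) * big + small

for acc in (0.5, 0.9, 0.99):
    print(acc, expected_payoff(True, acc), expected_payoff(False, acc))
# One-boxing earns more for any accuracy above roughly 0.5005.
```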
Now TDT, as I understand it, assumes that we can include in our graph a node for the answer given by an abstract logical process. For example, to predict the effects of pushing some buttons on a calculator, we would look both at the result of a “timeless” logical process and at the physical nodes that determine whether the calculator follows that process.
Let’s say you have a similar model of yourself. Then, if and only if your model of the world says that the abstract answer given by your decision procedure does not sufficiently determine Omega’s action, a counterfactual question about that answer will tell you to two-box. But if Omega, when examining physical evidence, just looks at the physical nodes that (sufficiently) determine whether or not you will use TDT (or whatever decision procedure you’re using), then presumably Omega knows what answer that process gives, which will help determine the result. A counterfactual question about the logical output would then tell you to one-box. TDT, I think, asks that question and gets that answer. UDT I barely understand at all.
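Here is a toy sketch of that distinction (my own simplification, not TDT’s actual formalism): we take the counterfactual over the logical output of the decision procedure, and the recommendation flips depending on whether Omega’s prediction is modelled as depending on that same node.

```python
# A toy sketch of the distinction above (my own simplification, not TDT's formalism).
# We take the counterfactual over the logical output of the decision procedure;
# whether it recommends one-boxing depends on whether Omega's prediction is
# modelled as depending on that same logical node.

BIG, SMALL = 1_000_000, 1_000

def payoff(action, prediction):
    box_b = BIG if prediction == "one-box" else 0
    return box_b if action == "one-box" else box_b + SMALL

def counterfactual_value(logical_output, omega_reads_logical_node):
    # "Surgery" on the logical node: your action always follows it...
    action = logical_output
    # ...and Omega's prediction follows it only if that node feeds the prediction;
    # otherwise the prediction is fixed independently (the fixed value is arbitrary,
    # since two-boxing then dominates either way).
    prediction = logical_output if omega_reads_logical_node else "two-box"
    return payoff(action, prediction)

for depends in (True, False):
    best = max(["one-box", "two-box"],
               key=lambda out: counterfactual_value(out, depends))
    print(f"Omega's prediction tracks the logical node: {depends} -> choose {best}")
```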
(The TDT answer to the OP’s problem depends on how we interpret “two-boxing gene”.)