When exactly is t2? Is it before or after Omega has decided on the contents of the box?
After.
If it’s before, then one-boxing is better. If it’s after, then again you can’t change anything here. You’ll do whatever Omega predicted you’ll do.
I don’t see how that follows. As an analogy, suppose you knew that Omega predicted whether you’d eat salad or pizza for lunch, but you didn’t know Omega’s prediction. When lunch time rolls around, it’s still useful to think about whether you should eat salad or pizza, right?
Let’s formalize it. Your decision procedure is some kind of deterministic algorithm p that can do arbitrary computational steps. To model “arbitrary computational steps”, let’s just think of it as a function outputting an arbitrary string representing thoughts or computations or whatnot. The only input to p is the time step, since you don’t receive any other information. So in the toy model, p: {t1, t2, t3} → Σ*. Also, your output p(t2) must be such that it determines your decision, so we can define a predicate D that takes your thoughts p(t2) and checks whether you one-box or two-box.
Then the procedure of Omega that fills the opaque box, Ω, is just a function defined by the rule
Ω: p ↦ 1000000 if D(p(t2)) = one-box, 0 if D(p(t2)) = two-box
So what Causal Decision Theory allows you to do (and what I feel like you’re still trying to do) is choose the output of p at time t2. But you can’t do this. What you can do is choose p itself, arbitrarily. You can choose it to always one-box, to always two-box, to think “I will one-box” at time t1 and then two-box at time t2, etc. But you don’t get around the fact that every p such that D(p(t2)) = one-box ends up with a million dollars, and every p such that D(p(t2)) = two-box ends up with 1000 dollars. Hence you should choose one of the former kinds of p.
(And yes, I realize that the formalism of letting you output an arbitrary string of thoughts is completely redundant since it just gets reduced to a binary choice anyway, but that’s kinda the point, since the same is true for you in the experiment. It doesn’t matter whether you first decide to one-box before you eventually two-box; any choice that ends with you two-boxing is equivalent. The real culprit here is that the intuition of you choosing your action is so hard to get rid of.)
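The toy model above can be sketched in a few lines of code. This is just an illustration of the formalism as described (p, D, Ω and the payoffs); the specific string conventions and the two example procedures are my own:

```python
# A minimal sketch of the toy model: p maps time steps to strings of
# thoughts, D reads the decision off p(t2), Omega fills the opaque box.

def D(thoughts):
    # The predicate D: extract the decision from the thoughts at t2.
    # (Convention assumed here: the thought string ends with the decision.)
    return "one-box" if thoughts.endswith("one-box") else "two-box"

def Omega(p):
    # Omega inspects the whole procedure p and fills the opaque box
    # before t2, based on what p will output at t2.
    return 1_000_000 if D(p("t2")) == "one-box" else 0

def payoff(p):
    # A one-boxer takes only the opaque box; a two-boxer additionally
    # takes the transparent $1000.
    opaque = Omega(p)
    return opaque if D(p("t2")) == "one-box" else opaque + 1000

# Two example procedures p: {t1, t2, t3} -> strings of thoughts.
# Note the waverer "decides" to one-box at first but ends up two-boxing.
one_boxer = lambda t: "I one-box" if t == "t2" else "thinking..."
waverer = lambda t: "I will one-box... actually I two-box" if t == "t2" else "thinking..."

print(payoff(one_boxer))  # 1000000
print(payoff(waverer))    # 1000
```

The waverer illustrates the parenthetical point: the intermediate thoughts are redundant, since any p that ends up two-boxing at t2 gets the same $1000.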
I’m not familiar with some of this notation but I’ll do my best.
It makes sense to me that if you can install a decision algorithm into yourself, at T0 let’s say, then you’d want to install one that one-boxes.
But I don’t think that’s the scenario in Newcomb’s Problem. From what I understand, in Newcomb’s Problem, you’re sitting there at T2, confronted by Omega, never having thought about any of this stuff before (let’s suppose). At that point you can come up with a decision algorithm. But T1 is already in the past, so whatever algorithm you come up with at T2 won’t actually affect what Omega predicts you’ll do (assuming no backwards causality).
From what I understand, in Newcomb’s Problem, you’re sitting there at T2, confronted by Omega, never having thought about any of this stuff before (let’s suppose). At that point you can come up with a decision algorithm.
With this sentence, you’re again putting yourself outside the experiment; you get a model where you-the-person-in-the-experiment is one thing inside the experiment, and you-the-agent is another thing sitting outside, choosing what your brain does.
But it doesn’t work that way. In the formalism, p describes your entire brain. (Which is the correct way to formalize it because Omega can look at your entire brain.) Your brain cannot step out of causality and decide to install a different algorithm. Your brain is entirely described by p, and it’s doing exactly what p does, which is also what Omega predicted.
If it helps, you can forget about the “decision algorithm” abstraction altogether. Your brain is a deterministic system; Omega predicted what it does at t2, it will do exactly that thing. You cannot decide to do something other than the deterministic output of your brain.