I’m not familiar with some of this notation but I’ll do my best.
It makes sense to me that if you can install a decision algorithm into yourself (at T0, let’s say), then you’d want to install one that one-boxes.
But I don’t think that’s the scenario in Newcomb’s Problem. From what I understand, in Newcomb’s Problem, you’re sitting there at T2, confronted by Omega, never having thought about any of this stuff before (let’s suppose). At that point you can come up with a decision algorithm. But T1 is already in the past, so whatever algorithm you come up with at T2 won’t actually affect the prediction Omega already made at T1 (assuming no backwards causality).
> From what I understand, in Newcomb’s Problem, you’re sitting there at T2, confronted by Omega, never having thought about any of this stuff before (let’s suppose). At that point you can come up with a decision algorithm.
With this sentence, you’re again putting yourself outside the experiment: you end up with a model where you-the-person-in-the-experiment is one thing, and you-the-agent is another thing sitting outside it, choosing what your brain does.
But it doesn’t work that way. In the formalism, p describes your entire brain. (Which is the correct way to formalize it because Omega can look at your entire brain.) Your brain cannot step out of causality and decide to install a different algorithm. Your brain is entirely described by p, and it’s doing exactly what p does, which is also what Omega predicted.
If it helps, you can forget about the “decision algorithm” abstraction altogether. Your brain is a deterministic system; Omega predicted what it does at T2, and it will do exactly that thing. You cannot decide to do something other than the deterministic output of your brain.
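To make the determinism point concrete, here’s a minimal toy sketch of the kind of setup I mean. It’s only an illustration under my own assumptions (the function names, the payoffs, and the idea that Omega predicts by literally re-running the program are all mine, not anything from the original formalism): the brain is just a deterministic program p, Omega runs p at T1 to fill the boxes, and the same p runs at T2, so the action can never come apart from the prediction.

```python
# Toy model (my own illustration): the brain is a deterministic program p.

def one_boxer():
    # A brain whose deterministic output at T2 is to take only the opaque box.
    return "one-box"

def two_boxer():
    # A brain whose deterministic output at T2 is to take both boxes.
    return "two-box"

def play_newcomb(p):
    # T1: Omega predicts the T2 action by running the same deterministic program p.
    prediction = p()
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    transparent_box = 1_000

    # T2: the agent acts. Because p is deterministic, the action always equals
    # the prediction; "two-boxing against a one-box prediction" never happens.
    action = p()
    return opaque_box if action == "one-box" else opaque_box + transparent_box

print(play_newcomb(one_boxer))  # 1000000
print(play_newcomb(two_boxer))  # 1000
```

In this sketch the one-boxing program ends up with $1,000,000 and the two-boxing program with $1,000, and at no point does either program get to choose something other than its own deterministic output.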