From what I understand, in Newcomb’s Problem, you’re sitting there at t2, confronted by Omega, never having thought about any of this stuff before (let’s suppose). At that point you can come up with a decision algorithm.
With this sentence, you’re again putting yourself outside the experiment; you get a model where you-the-person-in-the-experiment is one thing inside the experiment, and you-the-agent is another thing sitting outside, choosing what your brain does.
But it doesn’t work that way. In the formalism, p describes your entire brain. (That’s the correct way to formalize it, because Omega can look at your entire brain.) Your brain cannot step out of causality and decide to install a different algorithm. Your brain is entirely described by p, and it’s doing exactly what p does, which is also what Omega predicted.
If it helps, you can forget about the “decision algorithm” abstraction altogether. Your brain is a deterministic system; Omega predicted what it will do at t2, and it will do exactly that. You cannot decide to do something other than the deterministic output of your brain.
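To make the point concrete, here is a minimal toy sketch of that setup. All names here (`p`, `omega_fill`, `play`, the payoff amounts) are illustrative assumptions, not a standard formalization; the only thing it's meant to show is that once the agent is a fixed deterministic function, Omega can predict its output by running it, and the agent's actual choice at t2 cannot differ from that output.

```python
# Toy model: the agent's entire brain is a pure function p.
# Omega predicts the choice by simply evaluating p, then fills the boxes.

def p(situation):
    # A deterministic "one-boxer" brain: same input, same output, always.
    return "one-box"

def omega_fill(agent):
    # Omega inspects/simulates the whole brain before the boxes are set.
    prediction = agent("t2")
    return 1_000_000 if prediction == "one-box" else 0

def play(agent):
    opaque = omega_fill(agent)      # box contents are fixed first
    choice = agent("t2")            # at t2, the brain does exactly what p does
    if choice == "one-box":
        return opaque
    return opaque + 1_000           # two-boxing adds the transparent $1000

print(play(p))
```

Because `p` is deterministic, `prediction` and `choice` are necessarily the same value; there is no extra "you" that can make `choice` come out differently after Omega has run `p`.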