In the traditional formulation of Newcomb’s Problem (at least here on Less Wrong), if Omega predicts you’ll use a randomizer, it will leave box B empty.
That’s weird. Assuming human decision-making is caused by neural processes, which aren’t perfectly reliable, there would be no way for a human not to use a randomizer.
We assume that Omega is powerful enough to simulate your brain and its environment precisely, and that quantum effects are negligible.
In that case, you could still say that there’s no way not to use a randomizer, but Omega would be using the same randomizer with the same seed.
If you flip a coin as your randomizer, Omega could simulate that too. Yet traditionally, using a coin doesn’t fly while using your brain is fine.
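To make the “same seed” point concrete, here’s a minimal sketch (the toy agent and function names are hypothetical, purely for illustration): if the agent’s randomizer is a deterministic pseudo-random generator and Omega runs the same computation from the same seed, Omega’s prediction matches the agent’s “random” choice every time.

```python
import random

def agent_decision(seed: int) -> str:
    """Toy agent whose 'choice' comes from a pseudo-random coin flip."""
    rng = random.Random(seed)  # the agent's internal randomizer
    return "one-box" if rng.random() < 0.5 else "two-box"

def omega_prediction(seed: int) -> str:
    """Omega simulates the agent exactly, including the randomizer's seed."""
    return agent_decision(seed)  # same deterministic process, same seed

# Omega's prediction agrees with the agent's actual decision for every seed.
for seed in range(10):
    assert omega_prediction(seed) == agent_decision(seed)
print("Omega predicts the 'randomized' choice perfectly.")
```

The only thing doing the work here is determinism plus access to the seed; whether the randomness lives in a coin, a PRNG, or noisy neurons makes no difference to a simulator that captures all of it.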