So the source code of your brain just needs to decide whether it will be source code that one-boxes or not.
First, in the classic Newcomb problem, meeting Omega comes as a surprise to you. You don't get to precommit to deciding one way or the other, because you had no idea such a situation would arise: you just get to decide now.
You can, however, decide whether you're the sort of person who accepts that their decisions can be deterministically predicted in advance with sufficient certainty, or whether you'll claim that other people predicting your choice must be a violation of causality (it's not).
Why would you make such a decision if you don’t expect to meet Omega and don’t care much about philosophical head-scratchers?
And, by the way, predicting your choice is not a violation of causality, but believing that your choice (of the boxes, not of the source code) affects what’s in the boxes is.
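To make the causal order concrete, here is a minimal toy sketch (in Python; names like `omega_fill_boxes` and `one_boxer` are made up for illustration, not any standard setup): Omega "predicts" simply by running the agent's decision function before the boxes are filled, so the contents are fixed by the agent's source code, and the later physical choice doesn't retro-cause anything.

```python
# Toy Newcomb setup: Omega predicts by running the agent's decision
# function (its "source code") strictly before the boxes are filled,
# so there is no backwards causation anywhere in the model.

def one_boxer():
    """An agent whose source code commits it to taking only box B."""
    return "one-box"

def two_boxer():
    """An agent whose source code commits it to taking both boxes."""
    return "two-box"

def omega_fill_boxes(agent):
    """Omega simulates the agent's code and fills the boxes accordingly."""
    prediction = agent()            # perfect prediction = just run the code
    box_a = 1_000                   # transparent box, always $1,000
    box_b = 1_000_000 if prediction == "one-box" else 0
    return box_a, box_b

def play(agent):
    box_a, box_b = omega_fill_boxes(agent)   # contents are fixed here
    choice = agent()                          # the "real" choice comes later
    return box_b if choice == "one-box" else box_a + box_b

print(play(one_boxer))   # 1000000
print(play(two_boxer))   # 1000
```

In this toy model, the payoff depends entirely on which decision function you are; by the time you physically take the boxes, nothing about their contents is left for your choice to affect.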
Second, you are assuming that the brain is free to reconfigure and rewrite its own software, which is clearly not true for humans or for any existing agent.