I don’t think “compatibilist” means that you can pretend two logically mutually exclusive propositions can both be true. If it is accepted as a true proposition that Omega has predicted your actions, then your actions are decided before you experience the illusion of “choosing” them. Actually, whether or not there is an Omega predicting your actions, this may still be true.
Accepting the predictive power of Omega, it logically follows that when you one-box you will get the $1M. A CDT-rational agent only fails on this if it fails to accept the prediction and constructs a (false) causal model that includes the incoherent idea of “choosing” something other than what must happen according to the laws of physics. Does CDT require such a false model to be constructed? I dunno. I’m no expert.
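To make the payoff structure concrete, here’s a minimal sketch in Python (assuming the standard amounts, which the thread doesn’t fully state: $1M in the opaque box iff Omega predicts one-boxing, and $1k always in the transparent box):

    # Newcomb payoffs under the standard assumptions. With a perfect
    # predictor, Omega's prediction always matches the actual action,
    # so only the two matching outcomes can ever occur.
    def payoff(action: str, prediction: str) -> int:
        opaque = 1_000_000 if prediction == "one-box" else 0
        transparent = 1_000
        return opaque if action == "one-box" else opaque + transparent

    for action in ("one-box", "two-box"):
        print(action, payoff(action, prediction=action))
    # one-box 1000000
    # two-box 1000

The mismatched cases (action differing from prediction) are exactly the counterfactuals the perfect-prediction condition rules out.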
The real causal model is that some set of circumstances decided what you were going to “choose” when presented with Omega’s deal, and those circumstances also led to Omega’s 100% accurate prediction.
If being a compatibilist leads you to reject the possibility of such a scenario, then it also logically excludes the perfect predictive power of Omega and Newcomb’s problem disappears.
But in the problem as stated, you will only two-box if you get confused about the situation or you don’t want $1M for some reason.
“then your actions are decided before you experience the illusion of ‘choosing’ them.”
Where’s the illusion? If I choose something according to my own preferences, why should it be an illusion merely because someone else can predict that choice if they know said preferences?
Why does their knowledge of my action affect my decision-making powers?
The problem is that you’re using the words “decided” and “choosing” confusingly, with different meanings at the same time. One meaning is having the final input on the action I take; the other meaning seems to be a discussion of when the output can be calculated.
The output can be calculated before I actually even insert the input, sure—but it’s still my input, and therefore my decision—nothing illusory about it, no matter how many people calculated said input in advance: even though they calculated it, it was still I who controlled it.
The knowledge of your future action is only knowledge if it has a probability of 1. Omega acquiring that knowledge by calculation or otherwise does not affect your choice, but the very fact that such knowledge can exist (whether Omega has it or not) means your choice is determined absolutely.
What happens next is exactly the everyday meaning of “choosing”. Signals zap around your brain in accordance with the laws of physics and evaluate courses of action according to some neural representation of your preferences, and one course of action is the one you will “decide” to do. Soon afterwards, your conscious mind becomes aware of the decision and feels like it made it. That’s one part of the illusion of choice.
EDIT: I’m assuming you’re a human. A rational agent need not have this incredibly clunky architecture.
The second part of the illusion is specific to this very artificial problem. The counterfactual (you choose the opposite of what Omega predicted) just DOESN’T EXIST. It has probability 0. It’s not even that it could have happened in another branch of the multiverse—it is logically precluded by the condition of Omega being able to know with probability 1 what you will choose. 1 − 1 = 0.
“The knowledge of your future action is only knowledge if it has a probability of 1.”
Do you think Newcomb’s Box fundamentally changes if Omega is only right with a probability of 99.9999999999999%?
“Signals zap around your brain in accordance with the laws of physics and evaluate courses of action according to some neural representation of your preferences, and one course of action is the one you will ‘decide’ to do. Soon afterwards, your conscious mind becomes aware of the decision and feels like it made it.”
That process “is” my mind—there’s no mind anywhere which can be separate from those signals. So you say that my mind feels like it made a decision but you think this is false? I think it makes sense to say that my mind feels like it made a decision and it’s completely right most of the time.
My mind would only be having the “illusion” of choice if someone else, someone outside my mind, intervened between the signals and implanted a different decision, according to their own desires, and the rest of my brain just rationalized the already-made choice. But as long as the process is truly internal, the process is truly my mind’s—and my mind’s feeling that it made the choice corresponds to reality.
“The counterfactual (you choose the opposite of what Omega predicted) just DOESN’T EXIST.”
That the opposite choice isn’t made in any universe, doesn’t mean that the actually made choice isn’t real—indeed the less real the opposite choice, the more real your actual choice.
Taboo the word “choice”, and let’s talk about “decision-making process”. Your decision-making process exists in your brain, and therefore it’s real. It doesn’t have to be uncertain in outcome to be real—it’s real in the sense that it is actually occurring. Occurring in a deterministic manner, YES—but how does that make the process any less real?
Is gravity unreal or illusory because it’s deterministic and predictable? No. Then neither is your decision-making process unreal or illusory.
Yes, it is your mind going through a decision-making process. But most people feel that their conscious mind is the part making the decisions, and for humans that isn’t actually true, although attention seems to be part of consciousness, and attention to different parts of the input probably influences what happens. I would call the feeling of consciously making a decision, when that isn’t really what’s happening, somewhat illusory.
The decision-making process is real, but my feeling that there was an alternative I could have chosen instead (even though in this universe that isn’t true) is inaccurate. Taboo “illusion” too if you like, but we can probably agree to call that a different preference for usage of the words and move on.
Incidentally, I don’t think Newcomb’s problem changes dramatically as Omega’s success rate varies. You just get different expected values for one-boxing and two-boxing on a continuous scale, don’t you?
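For what it’s worth, here’s that arithmetic sketched out, again assuming the standard payoffs and that Omega’s accuracy p applies symmetrically to either choice:

    # Expected values as a function of Omega's accuracy p.
    def ev_one_box(p: float) -> float:
        # Predicted correctly: $1M in the opaque box. Mispredicted: empty box.
        return p * 1_000_000

    def ev_two_box(p: float) -> float:
        # Predicted correctly: only the $1k. Mispredicted: $1M plus the $1k.
        return p * 1_000 + (1 - p) * 1_001_000

    for p in (1.0, 0.999999999999999, 0.9, 0.5):
        print(p, ev_one_box(p), ev_two_box(p))

Setting the two expressions equal gives p = 1,001,000 / 2,000,000 ≈ 0.5005, so one-boxing has the higher expected value at any accuracy above roughly 50.05%; nothing special happens between 100% and 99.9999999999999%.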