If by locked in you mean, only a subset of all possible world states are available, then yes, your first sentence is on target.
As to the second, it’s not really a matter of the question making sense. It’s a well-formed English sentence, its meaning is clear, it can be answered, and so on.
It is just that the question will reliably induce answers which are answers to something different from the scenario as posed, in which a) Omega is understood to be a perfect predictor, and b) all the relevant facts are only the ordinary state of the world plus a). In your scenario, the answer I want to give—in fact the answer I would give—is “I tell Omega to get lost.” I would answer as if you’d asked “What do you want to answer”, or “What outcome would you prefer, if you were free to disregard the logical constraints on the scenario.”
Suppose I ask you to choose a letter string that conforms to the pattern (B|Z)D?. The letter B is worth $1M and the letter D is worth $1K. You are to choose the best possible string. Clearly the possibilities are BD, ZD, B, Z. Now we prefix each string with a digit giving the length of your chosen string: 2BD, 2ZD, 1B, 1Z.
The original Newcomb scenario boils down to this: conditional on the string not containing both 2 and B (and not containing both 1 and Z), which string choice has the highest expected value? You’re disguising this question, which has an obvious and correct answer of “1B”, as another (“What do you do”).
It doesn’t matter that 2BD has the highest expected value of all. It doesn’t matter that there seems to be a “timing” consideration, in which Omega has “already” chosen the second letter in the string, and you’re “choosing” the number “afterwards”. The information that Omega is a perfect predictor is a logical constraint on the strings that you can pick from, i.e. on the “end states” that you can experience. Your “decision” has to be compatible with one of these end states.
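To make the enumeration concrete, here is a minimal sketch (in Python; the $1M and $1K values are the ones from the paragraph above, and a payoff of $0 for Z is assumed) that filters the four strings through the perfect-predictor constraint and then takes the highest-valued survivor:

```python
# Payoffs from the setup above: B = the $1M box is full, D = the $1K box,
# Z = the $1M box is empty (assumed worth $0). The digit prefix is the
# length of the letter string you take.
PAYOFF = {"B": 1_000_000, "D": 1_000, "Z": 0}

strings = ["2BD", "2ZD", "1B", "1Z"]

def value(s: str) -> int:
    """Sum the dollar value of the letters in a string."""
    return sum(PAYOFF[c] for c in s if c in PAYOFF)

def allowed(s: str) -> bool:
    """Omega constraint: no string may contain both 2 and B, or both 1 and Z."""
    return not ({"2", "B"} <= set(s) or {"1", "Z"} <= set(s))

admissible = [s for s in strings if allowed(s)]
print(admissible)                  # ['2ZD', '1B']
print(max(admissible, key=value))  # '1B' -- the one-boxing answer
```

Only 2ZD ($1K) and 1B ($1M) survive the constraint, so the best admissible choice is 1B, as claimed.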
It is just that the question will reliably induce answers which are answers to something different from the scenario as posed[...]
Why? I don’t understand why the answers are disconnected from the scenario. Why isn’t all of this included in the concept of a perfect predictor?
[...], in which a) Omega is understood to be a perfect predictor, and b) all the relevant facts are only the ordinary state of the world plus a). In your scenario, the answer I want to give—in fact the answer I would give—is “I tell Omega to get lost.” I would answer as if you’d asked “What do you want to answer”, or “What outcome would you prefer, if you were free to disregard the logical constraints on the scenario.”
So… what if the scenario allows for you to want to give $5? The scenario you are talking about is impossible because Omega wouldn’t have asked you in that scenario. It would have been able to predict your response and would have known better than to ask.
Suppose I ask you to choose a letter[...] which string choice has the highest expected value? You’re disguising this question, which has an obvious and correct answer of “1B”, as another (“What do you do”).
Hmm. Okay, that makes sense.
It doesn’t matter that 2BD has the highest expected value of all. It doesn’t matter that there seems to be a “timing” consideration, in which Omega has “already” chosen the second letter in the string, and you’re “choosing” the number “afterwards”.
Are you saying that it doesn’t matter for the question, “Which string choice has the highest expected value?” or the question, “What do you do?” My guess is the latter.
The information that Omega is a perfect predictor is a logical constraint on the strings that you can pick from, i.e. on the “end states” that you can experience. Your “decision” has to be compatible with one of these end states.
Okay, but I don’t understand how this distinguishes the two questions. If I asked, “What do you do?” what am I asking? Since it’s not “Which string scores best?”
My impression was that asking, “What do you do?” is asking for a decision between all possible end states. Apparently this was a bad impression?
From a standpoint of the psychology of language, when you ask “What do you do”, you’re asking me to envision a plausible scenario—basically to play a movie in my head. If I can visualize myself two-boxing and somehow defying Omega’s prediction, my brain will want to give that answer.
When you ask “What do you do”, you’re talking to the parts of my brain that consider all of 2BD, 2ZD, 1B and 1Z as relevant possibilities (because they have been introduced in the description of the “problem”).
If you formalize first, then ask me to pick one of 2ZD or 1B after pointing out that the other possibilities are eliminated by the Omega constraint, I’m more likely to give the correct answer.
Oh. Okay, yeah, I guess I wasn’t looking for an answer in terms of “What verbal response do you give to my post?” I was looking for an answer strictly in terms of possible scenarios.
Is there a better way to convey that than “What do you do?” Or am I still missing something? Or… ?