I am sorry, I cannot understand what you are getting at in either of your paragraphs.
In the first one, are you arguing that the original Newcomb problem is contradictory? The problem assumes that Omega can predict your behavior. Presumably this is not done magically but by knowing your initial state and running some sort of simulation. Here the initial state is defined as everything that affects your choice (otherwise Omega wouldn’t be accurate), so if there is a Mentok, his initial state is included as well. I fail to see any contradiction.
In the second one, I agree with “The strength of the connection between the causal nodes makes a big difference in practice.” but fail to see the relevance (I would say we are assuming in these problems that the connection is very strong in both Newcomb and Smoking), and cannot parse at all your reasoning in the last sentence. Could you elaborate?
In the first one, are you arguing that the original Newcomb problem is contradictory?
My argument is that Newcomb’s Problem rests on these assumptions:
Omega is a perfect predictor of whether or not you will take the second box.
Omega’s prediction determines whether or not he fills the second box.
There’s a hidden assumption that many people import: “Causality cannot flow backwards in time,” or “Omega doesn’t use magic,” which makes the problem troubling. If you draw a causal arrow from your choice to the second box, then everything is clear and the decision is obvious.
If you try to import other nodes, then you run into trouble: if Omega’s prediction is based on some third thing, it is either the choice in disguise (and so you’ve complicated the problem to avoid magic by waving your hands) or it could be fooled (and so it’s not a Newcomb’s Problem so much as a “how can I trick Omega?” problem). You don’t want to be in the situation where you’re changing your node definition to deal with “what if X happens?”
For example, consider the question of what happens when you commit to a mixed strategy: flipping an unentangled qubit, one-boxing on up and two-boxing on down. If Omega uses magic, he predicts the outcome of the qubit, and you either get a thousand dollars or a million dollars. If Omega uses some deterministic prediction method, he can’t be certain of predicting correctly, so you can’t describe the original Newcomb’s problem that way, and any inferences you draw about the pseudo-Newcomb’s problem may not generalize.
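To make the payoffs in the magic-predictor case explicit, here is a minimal sketch assuming the standard split of $1,000 in the transparent box and $1,000,000 in the opaque box (the function names and exact dollar figures are illustrative, not part of the problem statement):

```python
import random

SMALL = 1_000        # transparent box, always present
LARGE = 1_000_000    # opaque box, filled only if Omega predicts one-boxing

def qubit_strategy():
    """Mixed strategy: one-box on 'up', two-box on 'down' (fair, unentangled qubit)."""
    return "one-box" if random.random() < 0.5 else "two-box"

def payoff(choice, prediction):
    opaque = LARGE if prediction == "one-box" else 0
    return opaque if choice == "one-box" else opaque + SMALL

# Magic predictor: Omega foresees the qubit outcome itself,
# so the prediction always matches the actual choice.
trials = 100_000
total = sum(payoff(c, prediction=c) for c in (qubit_strategy() for _ in range(trials)))
print("average payoff vs. a magic predictor:", total / trials)
# roughly (1_000_000 + 1_000) / 2: you get either a thousand or a million, as above.
```

A deterministic Omega, by contrast, has no way to condition on the qubit outcome, so this calculation simply does not apply to that version of the problem.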
OK, I understand now. I agree that the problem needs a bit of specification. If we treat the assumption that Omega is a perfect (or quasi-perfect) predictor as fixed, I see two possibilities:
Omega predicts by taking a sufficiently inclusive initial state and running a simulation. The initial state must include everything that predictably affects your choice (e.g. Mentok, or classical coin flips), so no trickery like “adding nodes” is possible. The assumption of a Predictor requires that your choice be deterministic: either quantum mechanics is wrong, or Omega only offers the problem in the first place to people whose choice will not depend on quantum effects. So you cannot (or “will not”) use the qubit strategy. (A toy sketch of this interpretation appears at the end of this comment.)
Omega predicts by magic. I don’t know how magic works, but assuming it amounts more or less to my choice affecting the prediction directly, via effective backwards-in-time causation, then the one-box solution becomes trivial as you say.
So I think the first interpretation is the one that makes the problem interesting. I was assuming it in my analogy to Smoking.
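Here is the toy sketch of the first interpretation mentioned above. It is only an illustration under the assumption that the agent’s choice is a deterministic function of a fully captured initial state; the state fields and function names are made up for the example:

```python
def agent_choice(initial_state):
    """The agent's decision procedure, assumed deterministic.
    Anything that predictably affects the choice (Mentok, classical
    coin flips, ...) must already be encoded in initial_state,
    otherwise Omega could not be an accurate predictor."""
    return "one-box" if initial_state["disposition"] == "one-boxer" else "two-box"

def omega_fills_opaque_box(initial_state):
    # Omega "predicts" by running the same decision procedure on a copy
    # of the very state that will later produce the real choice.
    return agent_choice(dict(initial_state)) == "one-box"

state = {"disposition": "one-boxer"}
filled = omega_fills_opaque_box(state)
choice = agent_choice(state)
reward = (1_000_000 if filled else 0) + (1_000 if choice == "two-box" else 0)
print(choice, "->", reward)  # the prediction necessarily matches the choice
```

There is no extra node to exploit here: anything that would let you diverge from the simulation would itself have to be part of the initial state Omega reads.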