I do not agree that a CDT agent must conclude that P(A)+P(B) = 1. The argument only holds if you assume the agent’s decision is perfectly unpredictable, i.e. that there can be no correlation between the prediction and the decision. This contradicts one of the premises of Newcomb’s Paradox, which assumes an entity with exactly the power to predict the agent’s choice. Incidentally, this reduces to (b) but not (a) from above.
By adopting my (a) but not (b) from above, i.e. Omega as a programmer and the agent as predictable code, you can easily see that P(A)+P(B) = 2, which means one-boxing code will perform the best.
But that’s not CDT reasoning. CDT uses surgery instead of conditionalization, that’s the whole point. So it doesn’t look at P(prediction = A|A), but at P(prediction = A|do(A)) = P(prediction = A).
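To make that contrast concrete, here is a minimal Python sketch; the accuracy figure, the payoff table, and the function names are assumptions for illustration, not anything from the problem statement or the literature:

```python
# Hypothetical numbers: an imperfect predictor and the usual dollar payoffs.
accuracy = 0.99                        # assumed P(prediction matches action)
payoff = {("one", "one"): 1_000_000,   # payoff[(prediction, action)]
          ("one", "two"): 1_001_000,
          ("two", "one"): 0,
          ("two", "two"): 1_000}

def conditional_value(action):
    # Conditionalization: P(prediction | action) depends on the action taken.
    return sum(payoff[(pred, action)] * (accuracy if pred == action else 1 - accuracy)
               for pred in ("one", "two"))

def surgical_value(action, p_pred_one):
    # Surgery: P(prediction | do(action)) = P(prediction), fixed before the action.
    return (payoff[("one", action)] * p_pred_one
            + payoff[("two", action)] * (1 - p_pred_one))

print({a: conditional_value(a) for a in ("one", "two")})  # favours one-boxing
for p in (0.1, 0.5, 0.9):                                  # favours two-boxing for every p
    print(p, {a: surgical_value(a, p) for a in ("one", "two")})
```

With these numbers, conditionalization rewards one-boxing for any accuracy above roughly 0.5005, while surgery recommends two-boxing for every fixed value of P(prediction = one-box).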
Your example with the cab doesn’t really involve a choice at all, because John’s going to work is effectively determined completely by the arrival of the cab.
I am not sure where our disagreement lies at the moment.
Are you using choice to signify strong free will? Because that would mean the hypothetical Omega is impossible without backwards causation, leaving us at (b) but not (a) and making the whole of Newcomb’s paradox moot. Whereas, if you include in Newcomb’s paradox that the choice of two-boxing will actually cause the big box to be empty, and the choice of one-boxing will actually cause the big box to contain a million dollars, by a mechanism of backwards causation, then any CDT model will solve the problem.
Perhaps we can narrow down our disagreement by taking the following variation of my example, where there is at least a bit more choice involved:
Imagine John, who never understood why he gets thirsty. Although there is a regularity in when he chooses to drink, it is a mystery to him. Every hour, Omega must predict whether John will choose to drink within the next hour. Omega’s prediction is kept secret from John until after the time interval has passed. Omega and John play this game every hour for a month, and it turns out that, while far from perfect, Omega’s predictions are a bit better than random. Afterwards, Omega explains that it beats blind guesses by knowing that John will very rarely wake up in the middle of the night to drink, and that his daily water consumption follows a normal distribution with a mean and standard deviation that Omega has estimated.
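A toy simulation of that kind of statistical Omega, with invented probabilities and hour boundaries (and ignoring the normal-distribution part, which would only refine the daytime estimate):

```python
import random

random.seed(0)

def john_drinks(hour):
    # Assumed habits: John very rarely drinks at night, and drinks in a
    # bit over half of his waking hours (invented numbers).
    return random.random() < (0.02 if hour < 6 else 0.6)

def omega_predicts(hour):
    # Omega predicts whichever outcome is more likely for that hour.
    return hour >= 6

hits_omega = hits_blind = total = 0
for day in range(30):                       # a month of hourly rounds
    for hour in range(24):
        actual = john_drinks(hour)
        hits_omega += (omega_predicts(hour) == actual)
        hits_blind += ((random.random() < 0.5) == actual)
        total += 1

print(hits_omega / total, hits_blind / total)   # roughly 0.7 vs 0.5
```

Far from perfect, but reliably better than blind guessing, and nothing about it requires the kind of insight that would threaten John’s sense of making a choice.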
I am not sure where our disagreement lies at the moment.
I’m not entirely sure either. I was just saying that a causal decision theorist will not be moved by Wildberger’s reasoning, because he’ll say that Wildberger is plugging in the wrong probabilities: when calculating an expectation, CDT uses not conditional probability distributions but surgically altered probability distributions. You can make that result in one-boxing if you assume backwards causation.
I think the point we’re actually talking about (or around) might be the question of how CDT reasoning relates to your (a). I’m not sure that the causal decision theorist has to grant that he is in fact interpreting the problem as “not (a) but (b)”. The problem specification only contains the information that so far, Omega has always made correct predictions. But the causal decision theorist is now in a position to spoil Omega’s record, if you will. Omega has already made a prediction, and whatever the causal decision theorist does now isn’t going to change that prediction. The fact that Omega’s predictions have been absolutely correct so far doesn’t enter into the picture. It just means that for all agents x that are not the causal decision theorist, P(x does A|Omega predicts that x does A) = 1 (and the same for B, and whatever value other than 1 you might want for an imperfect predictor Omega).
About the way you intend (a), the causal decision theorist would probably say that’s backward causation and refuse to accept it.
One way of putting it might be that the causal decision theorist simply has no way of reasoning with the information that his choice is predetermined, which is what I think you intend to convey with (a). Therefore, he has no way of (hypothetically) inferring Omega’s prediction from his own (hypothetical) action (because he’s only allowed to do surgery, not conditionalization).
Are you using choice to signify strong free will?
No, actually. Just the occurrence of a deliberation process whose outcome is not immediately obvious. In both your examples, that doesn’t happen: John’s behavior simply depends on the arrival of the cab or his feeling of thirst, respectively. He doesn’t, in a substantial sense, make a decision.
I will address your last paragraph first. The only significant difference between my original example and the proper Newcomb’s paradox is that, in Newcomb’s paradox, Omega is made a predictor by fiat and without explanation. This allows perfect prediction and choice to sneak into the same paragraph without obvious contradiction. It seems that if I try to make the mode of prediction transparent, you protest that there is no choice being made.
From Omega’s point of view, its Newcomb subjects are not making choices in any substantial sense; they are just predictably acting out their own personality. That is what allows Omega its predictive power. Choice is not something inherent to a system, but a feature of an outsider’s model of a system, in much the same sense as randomness is not something inherent to Eeny, meeny, miny, moe, however much it might seem that way to children.
As for the rest of our disagreement, I am not sure why you insist that CDT must work with a misleading model. The standard formulation of Newcomb’s paradox is inconsistent or underspecified. Here are some messy explanations for why, in list form:
“Omega predicts accurately, then you get to choose” is a false model, because “Omega has predicted you will two-box, then you get to choose” does not actually let you choose; one-boxing is an illegal choice, and two-boxing the only legal choice (In Soviet Russia joke goes here)
“You get to choose, then Omega retroactively fixes the contents of the boxes” is fine, and CDT solves it by one-boxing
“Omega tries to predict but is just blindly guessing, then you really get to choose” is fine, and CDT solves it by two-boxing
“You know that Omega has perfect predictive power and are free to be committed to either one- or two-boxing as you prefer” is nowhere near similar to the original Newcomb’s formulation, but is obviously solved by one-boxing
“You are not sure about Omega’s predictive power and are torn between trying to ‘game’ it and cooperating with it” is not Newcomb’s problem
“Your choice has to be determined by a deterministic algorithm, but you are not allowed to know this when designing the algorithm, so you must instead work in ignorance and design it by a false dominance principle” is just cheating
“Omega predicts accurately, then you get to choose” is a false model, because “Omega has predicted you will two-box, then you get to choose” does not actually let you choose; one-boxing is an illegal choice, and two-boxing the only legal choice (In Soviet Russia joke goes here)
Not if you’re a compatibilist, which Eliezer is last I checked.
The post scav made more or less represents my opinion here. Compatibilism, choice, free will and determinism are too many vaguely defined terms for me to discuss with. For compatibilism to make any sort of sense to me, I would need a new definition of free will. It is already difficult to discuss how stuff is, without simultaneously having to discuss how to use and interpret words.
Trying to leave the problematic words out of this, my claim is that the only reason CDT ever gives a wrong answer in a Newcomb’s problem is that you are feeding it the wrong model. http://lesswrong.com/lw/gu1/decision_theory_faq/8kef elaborates on this without muddying the waters too much with the vaguely defined terms.
I don’t think compatibilist means that you can pretend two logically mutually exclusive propositions can both be true. If it is accepted as a true proposition that Omega has predicted your actions, then your actions are decided before you experience the illusion of “choosing” them. Actually, whether or not there is an Omega predicting your actions, this may still be true.
Accepting the predictive power of Omega, it logically follows that when you one-box you will get the $1M. A CDT-rational agent only fails on this if it fails to accept the prediction and constructs a (false) causal model that includes the incoherent idea of “choosing” something other than what must happen according to the laws of physics. Does CDT require such a false model to be constructed? I dunno. I’m no expert.
The real causal model is that some set of circumstances decided what you were going to “choose” when presented with Omega’s deal, and those circumstances also led to Omega’s 100% accurate prediction.
If being a compatibilist leads you to reject the possibility of such a scenario, then it also logically excludes the perfect predictive power of Omega and Newcomb’s problem disappears.
But in the problem as stated, you will only two-box if you get confused about the situation or you don’t want $1M for some reason.
“then your actions are decided before you experience the illusion of “choosing” them.”
Where’s the illusion? If I choose something according to my own preferences, why should it be an illusion merely because someone else can predict that choice if they know said preferences?
Why does their knowledge of my action affect my decision-making powers?
The problem is you’re using the words “decided” and “choosing” confusingly, with different meanings at the same time. One meaning is having the final input on the action I take; the other meaning seems to be about when the output can be calculated.
The output can be calculated before I even provide the input, sure, but it’s still my input, and therefore my decision; nothing illusory about it, no matter how many people calculated said input in advance: even though they calculated it, it was I who controlled it.
The knowledge of your future action is only knowledge if it has a probability of 1. Omega acquiring that knowledge by calculation or otherwise does not affect your choice, but the very fact that such knowledge can exist (whether Omega has it or not) means your choice is determined absolutely.
What happens next is exactly the everyday meaning of “choosing”. Signals zap around your brain in accordance with the laws of physics and evaluate courses of action according to some neural representation of your preferences, and one course of action is the one you will “decide” to do. Soon afterwards, your conscious mind becomes aware of the decision and feels like it made it. That’s one part of the illusion of choice.
EDIT: I’m assuming you’re a human. A rational agent need not have this incredibly clunky architecture.
The second part of the illusion is specific to this very artificial problem. The counterfactual (you choose the opposite of what Omega predicted) just DOESN’T EXIST. It has probability 0. It’s not even that it could have happened in another branch of the multiverse—it is logically precluded by the condition of Omega being able to know with probability 1 what you will choose. 1 − 1 = 0.
The knowledge of your future action is only knowledge if it has a probability of 1.
Do you think Newcomb’s problem fundamentally changes if Omega is only right with a probability of 99.9999999999999%?
Signals zap around your brain in accordance with the laws of physics and evaluate courses of action according to some neural representation of your preferences, and one course of action is the one you will “decide” to do. Soon afterwards, your conscious mind becomes aware of the decision and feels like it made it.
That process “is” my mind—there’s no mind anywhere which can be separate from those signals. So you say that my mind feels like it made a decision but you think this is false? I think it makes sense to say that my mind feels like it made a decision and it’s completely right most of the time.
My mind would only be having the “illusion” of choice if someone else, someone outside my mind, intervened between the signals and implanted a different decision, according to their own desires, and the rest of my brain just rationalized the already-made choice. But as long as the process is truly internal, the process is truly my mind’s, and my mind’s feeling that it made the choice corresponds to reality.
“The counterfactual (you choose the opposite of what Omega predicted) just DOESN’T EXIST.”
That the opposite choice isn’t made in any universe doesn’t mean that the actually made choice isn’t real; indeed, the less real the opposite choice, the more real your actual choice.
Taboo the word “choice”, and let’s talk about “decision-making process”. Your decision-making process exists in your brain, and therefore it’s real. It doesn’t have to be uncertain in outcome to be real; it’s real in the sense that it is actually occurring. Occurring in a deterministic manner, YES, but how does that make the process any less real?
Is gravity unreal or illusory because it’s deterministic and predictable? No. Then neither is your decision-making process unreal or illusory.
Yes, it is your mind going through a decision-making process. But most people feel that their conscious mind is the part making decisions, and for humans that isn’t actually true, although attention seems to be part of consciousness, and attention to different parts of the input probably influences what happens. I would call that feeling of making a decision consciously, when that isn’t really happening, somewhat illusory.
The decision making process is real, but my feeling of there being an alternative I could have chosen instead (even though in this universe that isn’t true) is inaccurate. Taboo “illusion” too if you like, but we can probably agree to call that a different preference for usage of the words and move on.
Incidentally, I don’t think Newcomb’s problem changes dramatically as Omega’s success rate varies. You just get different expected values for one-boxing and two-boxing on a continuous scale, don’t you?
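For what it’s worth, here is that continuous scale as a quick sketch, assuming the standard $1,000,000 / $1,000 payoffs and straightforward conditioning on Omega’s success rate p:

```python
def expected_value(action, p):
    """Expected payoff when Omega's prediction matches the action with probability p."""
    if action == "one-box":
        return p * 1_000_000                    # the million is there only if predicted
    return p * 1_000 + (1 - p) * 1_001_000      # two-box: thousand, plus million if mispredicted

for p in (0.5, 0.5005, 0.9, 0.999999999999999, 1.0):
    print(p, expected_value("one-box", p), expected_value("two-box", p))
```

One-boxing comes out ahead for any p above 0.5005, so a 99.9999999999999% Omega and a perfect one recommend the same thing.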
Regarding illegal choices, the transparent variation makes it particularly clear: you can’t take both boxes if you see a million in the first box, and take one box otherwise.
You can walk backwards from your decision to the point where a copy of you was made, and then forwards to the point where that copy is processed by Omega, to find how your decision relates causally to the state of the boxes.
Underlying physics is symmetric in time. If you assume that the state of the world is such that one box is picked up by your arm, that imposes constraints on both the future and the past light cone. If you do not process the constraints on the past light cone, then your simulator state does not adhere to the laws of physics; namely, the decision arises out of thin air, by magic.
If you do process the constraints fully, then the action of taking one box requires a pre-copy state of “you” that leads to the decision to pick one box, which requires money in the first box; the action of taking two boxes, likewise, after processing the constraints, requires no money in the first box. (“You” is a black box which is assumed to be non-magical, copyable, and deterministic, for the purpose of the exercise.)
edit: came up with an example. Suppose “you” is a robotics controller: you know you’re made of various electrical components, and you’re connected to a battery and some motors. You evaluate a counterfactual where you put a current onto a wire for some time. Constraint imposed on the past: the battery has been charged within the last 10 hours, because otherwise it couldn’t supply enough current. If the constraints contradict known reality, then you know you can’t do this action. Suppose there’s a replacement battery pack 10 meters away from the robot, and the robot is unsure whether the packs were swapped 5 hours ago; in the alternative that they weren’t, it would not have enough charge to get to the extra pack, and in the alternative that they were swapped, it doesn’t need to get to the spent extra pack. Evaluating the hypothetical where it gets to the extra pack, it knows the packs were swapped in the past and the extra pack is spent. (Of course for simplicity one can do all sorts of stuff, such as electrical currents coming out of nowhere, but outside the context of philosophical speculation the cause of the error is very clear.)
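A crude sketch of that bookkeeping, with all charge numbers invented; the point is only that the hypothetical “reach the extra pack” is consistent with exactly one past, which in turn settles whether the extra pack is any use:

```python
# Toy constraint propagation for the battery example (all numbers invented).
CHARGE_IF_SWAPPED, CHARGE_IF_NOT = 0.8, 0.1   # assumed remaining charge
COST_TO_REACH_EXTRA_PACK = 0.5                # assumed charge needed for the 10 m trip

def pasts_consistent_with(action):
    """Which past scenarios (packs swapped or not) allow this hypothetical action?"""
    pasts = []
    for swapped in (True, False):
        charge = CHARGE_IF_SWAPPED if swapped else CHARGE_IF_NOT
        if action == "reach extra pack" and charge < COST_TO_REACH_EXTRA_PACK:
            continue                           # this past contradicts the hypothetical
        pasts.append({"packs swapped": swapped, "extra pack fresh": not swapped})
    return pasts

print(pasts_consistent_with("reach extra pack"))  # only swapped=True -> extra pack is spent
print(pasts_consistent_with("stay put"))          # both pasts remain possible
```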
We do, by and large, agree. I just thought, and still think, the terminology is somewhat misleading. This is probably not a point I should press, because I have no mandate to dictate how words should be used, and I think we understand each other, but maybe it is worth a shot.
I fully agree that some values in the past and future can be correlated. This is more or less the basis of my analysis of Newcomb’s problem, and I think it is also what you mean by imposing constraints on the past light cone. I just prefer to use different words for backwards correlation and forwards causation.
I would say that the robot getting the extra pack necessitates that it had already been charged and did not need the extra pack, while not having been charged earlier would cause it to fail to recharge itself. I think there is a significant difference between how not being charged causes the robot to run out of power, versus how running out of power necessitates that it has not been charged.
You may of course argue that the future and the past are the same from the viewpoint of physics, and that either can be said to cause the other. However, as long as people consider the future and the past to be conceptually completely different, I do not see the hurry to erode these differences in the language we use. It probably would not be a good idea to make tomorrow refer to both the day before and the day after today, either.
I guess I will repeat: This is probably not a point I should press, because I have no mandate to dictate how words should be used.
I’d be the first to agree on terminology here. I’m not suggesting that the choice of the box causes money in the box, simply that those two are causally connected, in the physical sense. The whole issue seems to stem from taking the word ‘causal’ in causal decision theory and treating it as more than a mere name, bringing in enormous amounts of confused philosophy which doesn’t capture very well how physics works.
When deciding, you evaluate hypotheticals of you making different decisions. A hypothetical is like a snapshot of the world state. Laws of physics very often have to be run backwards from the known state to deduce a past state, and then forwards again to deduce a future state. E.g. a military robot sees a hand grenade flying into its field of view; it calculates the motion backwards to find where it was thrown from, finding the location of the grenade thrower, then uses a model of the grenade thrower to predict another grenade in the future.
So, you process the hypothetical where you picked up one box, to find how much money you get. You have the known state: you picked one box. You deduce that the past state of the deterministic you must have been Q, which results in picking up one box; a copy of that state was made, and that state resulted in a prediction of 1 box. You conclude that you get 1 million. You do the same for picking 2 boxes: the previous state must be R, etc., and you conclude you get 1000. You compare, and you pick the universe where you take 1 box.
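The same back-and-forth in code, treating the agent as an opaque deterministic function and using Q and R merely as labels for the two possible pre-copy states (dollar payoffs assumed as usual):

```python
def agent(state):
    # Black-box deterministic agent: state "Q" leads to one-boxing, "R" to two-boxing.
    return "one-box" if state == "Q" else "two-box"

def payoff(prediction, choice):
    big = 1_000_000 if prediction == "one-box" else 0
    return big if choice == "one-box" else big + 1_000

def evaluate_hypothetical(choice):
    # Walk backwards: which pre-copy states are consistent with making this choice?
    consistent = [s for s in ("Q", "R") if agent(s) == choice]
    # Walk forwards: the copy made from that state fixes Omega's prediction.
    return [payoff(agent(s), choice) for s in consistent]

print(evaluate_hypothetical("one-box"))   # [1000000]
print(evaluate_hypothetical("two-box"))   # [1000]
```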
(And with regards to the “smoking lesion” problem: the smoking lesion postulates a blatant logical contradiction. It postulates that the lesion affects the choice, which contradicts the premise that the choice is made by the agent we are speaking of. As a counterexample to a decision theory, it is laughably stupid.)
I think laughably stupid is a bit too harsh. As I understand things, confusion regarding Newcomb’s leads to new decision theories, which in turn makes the smoking lesion problem interesting, because the new decision theories introduce new, critical weaknesses in order to solve Newcomb’s problem. I do agree, however, that the smoking lesion problem is trivial if you stick to a sensible CDT model.
The problems with EDT are quite ordinary: it’s looking for good news, and also it is kind of under-specified (e.g. some argue it’d two-box in Newcomb’s after learning physics). A decision theory cannot be disqualified for giving a ‘wrong’ answer in the hypothetical that 2*2=5, or in the hypothetical that (a or not a) = false, or in the hypothetical that the decision is simultaneously controlled by the decision theory and set, without involvement of the decision theory, by the lesion (and by a random process if the correlation is imperfect).
From Omega’s point of view, its Newcomb subjects are not making choices in any substantial sense; they are just predictably acting out their own personality.
I probably wasn’t expressing myself quite clearly. I think the difference is this: Newcomb subjects are making a choice from their own point of view. Your Johns aren’t really making a choice even from their internal perspective: they just see if the cab arrives/if they’re thirsty and then, without deliberation, follow what their policy for such cases prescribes. I think this difference is intuitively substantial enough that the John cases can’t be used as intuition pumps for anything relating to Newcomb’s.
The standard formulation of Newcomb’s paradox is inconsistent or underspecified.
I don’t think it is, actually. It just seems so because it presupposes that your own choice is predetermined, which is kind of hard to reason with when you’re right in the process of making the choice. But that’s a problem with your reasoning, not with the scenario. In particular, the CDT agent has a problem with conceiving of his own choice as predetermined, and therefore has trouble formulating Newcomb’s problem in a way that he can use—he has to choose between getting two-boxing as the solution or assuming backward causation, neither of which is attractive.
Then I guess I will try to leave it to you to come up with a satisfactory example. The challenge is to give Omega Newcomb-like predictive power, while substantiating how Omega achieves this, and while still passing your own standard that the subject makes a choice from its own point of view. It is very easy to accidentally create paradoxes in mathematics by assuming mutually exclusive properties for an object, and the best way to discover these is generally to see if it is possible to construct or find an instance of the object described.
I don’t think it is, actually. It just seems so because it presupposes that your own choice is predetermined, which is kind of hard to reason with when you’re right in the process of making the choice. But that’s a problem with your reasoning, not with the scenario. In particular, the CDT agent has a problem with conceiving of his own choice as predetermined, and therefore has trouble formulating Newcomb’s problem in a way that he can use—he has to choose between getting two-boxing as the solution or assuming backward causation, neither of which is attractive.
This is not a failure of CDT, but one of your imagination. Here is a simple, five-minute model which has no problem conceiving of Newcomb’s problem without any backwards causation:
T=0: Subject is initiated in a deterministic state which can be predicted by Omega.
T=1: Omega makes an accurate prediction for the subject’s decision in Newcomb’s problem by magic / simulation / reading code / infallible heuristics. Denote the possible predictions P1 (one-box) and P2.
T=2: Omega sets up Newcomb’s problem with appropriate box contents.
T=3: Omega explains the setup to the subject and disappears.
T=4: Subject deliberates.
T=5: Subject chooses either C1 (one-box) or C2.
T=6: Subject opens box(es) and receives payoff dependent on P and C.
You can pretend to enter this situation at T=4, as suggested by the standard Newcomb’s problem. Then you can use the dominance principle, and you will lose. But this is just using a terrible model. You entered at T=0, because you were needed at T=1 for Omega’s inspection. If you did not enter the situation at T=0, then you can freely make a choice C at T=5 without any correlation to P, but that is not Newcomb’s problem.
Instead, at T=4 you become aware of the situation, and your decision-making algorithm must return a value for C. If you consider this only from T=4 and onward, this is completely uninteresting, because C is already determined. At T=1, P was determined to be either P1 or P2, and the value of C follows directly from this. Obviously, healthy one-boxing code wins and unhealthy two-boxing code loses, but there is no choice being made here, just different code with different return values being rewarded differently, and that is not Newcomb’s problem either.
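To make the “different code gets rewarded differently” point concrete, here is the timeline as a few lines of Python; the two subjects are hypothetical stand-ins, not anyone’s actual decision theory:

```python
def one_boxing_code():      # hypothetical subject, fixed at T=0
    return "C1"

def two_boxing_code():      # hypothetical subject, fixed at T=0
    return "C2"

def newcomb(subject):
    prediction = "P1" if subject() == "C1" else "P2"   # T=1: Omega inspects/simulates
    big_box = 1_000_000 if prediction == "P1" else 0   # T=2: boxes are filled
    choice = subject()                                 # T=4-5: the subject runs
    return big_box + (1_000 if choice == "C2" else 0)  # T=6: payoff

print(newcomb(one_boxing_code))   # 1000000
print(newcomb(two_boxing_code))   # 1000
```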
Finally, we will work under the illusion of choice, with Omega as a perfect predictor. We realize that T=0 is the critical moment, seeing as all subsequent T follow directly from it. We work backwards as follows:
T=6: My preferences are P1C2 > P1C1 > P2C2 > P2C1.
T=5: I should choose either C2 or C1 depending on the current value of P.
T=4: this is when all this introspection is happening
T=3: this is why
T=2: I would really like there to be a million dollars present.
T=1: I want Omega to make prediction P1.
T=0: Whew, I’m glad I could do all this introspection which made me realize that I want P1 and the way to achieve this is C1. It would have been terrible if my decision making just worked by the dominance principle. Luckily, the epiphany I just had, C1, was already predetermined at T=0, Omega would have been aware of this at T=1 and made the prediction P1, so (...) and P1 C1 = a million dollars is mine.
Shorthand version of all the above: if the decision is necessarily predetermined before T=4, then you should not pretend you make the decision at T=4. Insert a decision-making step at T=0.5, which causally determines the value of P and C. Apply your CDT to this step.
This is the only way of doing CDT honestly. It is the slightest bit messy, but that is exactly what happens when the problem itself contains a reference to the decision the decision theory is going to make, perfectly correlated with that decision before it has overtly been made. This sort of self-reference creates impossibilities out of thin air every day of the week, such as when Pinocchio says “my nose will grow now”. The good news is that this way of doing it is a lot less messy than inventing a new, superfluous decision theory, and it also allows you to deal with problems like the psychopath button without any trouble whatsoever.
But isn’t this precisely the basic idea behind TDT?
The algorithm you are suggesting goes something like this: choose that action which, if it had been predetermined at T=0 that you would take it, would lead to the maximal-utility outcome. You can call that CDT, but it isn’t. Sure, it’ll use causal reasoning for evaluating the counterfactual, but not everything that uses causal reasoning is CDT. CDT is surgically altering the action node (and not some precommitment node) and seeing what happens.
If you take a careful look at the model, you will realize that the agent has to be precommitted, in the sense that what he is going to do is already fixed. Otherwise, the step at T=1 is impossible. I do not mean that he has consciously precommitted himself to win at Newcomb’s problem, but, trivially, a deterministic agent must be precommitted.
It is meaningless to apply any sort of decision theory to a deterministic system. You might as well try to apply decision theory to the balls in a game of billiards, which assign high utility to remaining on the table but have no free choices to make. For decision theory to have a function, there needs to be a choice to be made between multiple, legal options.
As far as I have understood, your problem is that, if you apply CDT with an action node at T=4, it gives the wrong answer. At T=4, there is only one option to choose, so the choice of decision theory is not exactly critical. If you want to analyse Newcomb’s problem, you have to insert an action node at T<1, while there is still a choice to be made, and CDT will do this admirably.
As far as I have understood, your problem is that, if you apply CDT with an action node at T=4, it gives the wrong answer. At T=4, there is only one option to choose, so the choice of decision theory is not exactly critical.
Yes, it is. The point is that you run your algorithm at T=4, even if it is deterministic and therefore its output is already predetermined. Therefore, you want an algorithm that, executed at T=4, returns one-boxing. CDT simply does not do that.
Ultimately, it seems that we’re disagreeing about terminology. You’re apparently calling something CDT even though it does not work by surgically altering the node for the action under consideration (that action being the choice of box, not the precommitment at T<1) and then looking at the resulting expected utilities.
If you apply CDT at T=4 with a model which builds in the knowledge that the choice C and the prediction P are perfectly correlated, it will one-box. The model is exceedingly simple:
T’=0: Choose either C1 or C2
T’=1: If C1, then gain 1000. If C2, then gain 1.
This excludes the two impossible combinations, P2C1 and P1C2, since these violate the correlation constraint. CDT makes a wrong choice when these two are included, because then you have removed the information of the correlation constraint from the model, changing the problem to one in which Omega is not a predictor.
Okay, so I take the defining characteristic of CDT to be its use of counterfactuals. So far, I have been arguing on the basis of a Pearlean conception of counterfactuals, and then this is what happens:
Your causal network has three variables, A (the algorithm used), P (Omega’s prediction), C (the choice). The causal connections are A → P and A → C. There is no causal connection between P and C.
Now the CDT algorithm looks at counterfactuals with the antecedent C1. In a Pearlean picture, this amounts to surgery on the C-node, so no inference contrary to the direction of causality is possible. Hence, whatever the value of the P-node, it will seem to the CDT algorithm not to depend on the choice.
Therefore, even if the CDT algorithm knows that its choice is predetermined, it cannot make use of that in its decision, because it cannot update contrary to the direction of causality.
Now it turns out that natural-language counterfactuals work very much, but not quite, like Pearl’s counterfactuals: they allow a limited amount of backtracking contrary to the direction of causality, depending on a variety of psychological factors. So if you had a theory of counterfactuals that allowed backtracking in a case like Newcomb’s problem, then a CDT algorithm employing that conception of counterfactuals would one-box. The trouble would of course be to correctly state the necessary conditions for backtracking. The messy and diverse psychological and contextual factors that seem to be at play in natural language won’t do.
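For concreteness, here is a small sketch of that three-node network, with deterministic mechanisms assumed purely for illustration; surgery on C leaves P where it was, while intervening further upstream on A (choosing which algorithm to be) moves P along with C:

```python
# Causal network: A -> P and A -> C, no arrow between P and C.
def P_of(a):                    # Omega's prediction is a function of the algorithm
    return "P1" if a == "one-boxer" else "P2"

def C_of(a):                    # the choice is a function of the algorithm
    return "C1" if a == "one-boxer" else "C2"

payoff = {("P1", "C1"): 1000, ("P1", "C2"): 1001,
          ("P2", "C1"): 0,    ("P2", "C2"): 1}

def do_C(a, forced_c):
    # Pearl-style surgery on C: cut A -> C, keep A -> P intact.
    return payoff[(P_of(a), forced_c)]

def do_A(forced_a):
    # Intervening further upstream: setting A propagates to both P and C.
    return payoff[(P_of(forced_a), C_of(forced_a))]

a = "one-boxer"                                 # whatever the agent actually is
print(do_C(a, "C1"), do_C(a, "C2"))             # 1000 1001 -> surgery favours C2
print(do_A("one-boxer"), do_A("two-boxer"))     # 1000 1    -> upstream intervention favours one-boxing
```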
Could you try to give a straight answer: what is your problem with my model above? It accurately models the situation. It allows CDT to give a correct answer. It does not superficially resemble the word-for-word statement of Newcomb’s problem.
Therefore, even if the CDT algorithm knows that its choice is predetermined, it cannot make use of that in its decision, because it cannot update contrary to the direction of causality.
You are trying to use a decision theory to determine which choice an agent should make, after the agent has already had its algorithm fixed, which causally determines which choice the agent must make. Do you honestly blame that on CDT?
Could you try to give a straight answer: what is your problem with my model above? It accurately models the situation. It allows CDT to give a correct answer.
No, it does not, that’s what I was trying to explain. It’s what I’ve been trying to explain to you all along: CDT cannot make use of the correlation between C and P. CDT cannot reason backwards in time. You do know how surgery works, don’t you? In order for CDT to use the correlation, you need a causal arrow from C to P—that amounts to backward causation, which we don’t want. Simple as that.
You are trying to use a decision theory to determine which choice an agent should make, after the agent has already had its algorithm fixed, which causally determines which choice the agent must make.
I’m not sure what the meaning of this is. Of course the decision algorithm is fixed before it’s run, and therefore its output is predetermined. It just doesn’t know its own output before it has computed it. And I’m not trying to figure out what the agent should do—the agent is trying to figure that out. Our job is to figure out which algorithm the agent should be using.
PS: The downvote on your post above wasn’t from me.
You are applying a decision theory to the node C, which means you are implicitly stating: there are multiple possible choices to be made at this point, and this decision can be made independent of nodes not in front of this one. This means that your model does not model the Newcomb’s problem we have been discussing—it models another problem, where C can have values independent of P, which is indeed solved by two-boxing.
It is not the decision theory’s responsibility to know that the value of node C is somehow supposed to retrospectively alter the state of the branch the decision theory is working in. This is, however, a consequence of the modelling you do. You are deliberately applying CDT too late in your network, such that P, and thus the cost of being a two-boxer, has gone over the horizon, and such that the node C must affect P backwards, not because the problem actually contains backwards causality, but because you want to fix the values of the nodes in the wrong order.
If you do not want to make the assumption of free choice at C, then you can just not promote it to an action node. If the decision at C is causally determined by A, then you can apply a decision theory at node A and follow the causal inference. Then you will, once again, get a correct answer from CDT, this time for the version of Newcomb’s problem where A and C are fully correlated.
If you refuse to reevaluate your model, then we might as well leave it at this. I do agree that if you insist on applying CDT at C in your model, then it will two-box. I do not agree that this is a problem.
You don’t promote C to the action node, it is the action node. That’s the way the decision problem is specified: do you one-box or two-box? If you don’t accept that, then you’re talking about a different decision problem. But in Newcomb’s problem, the algorithm is trying to decide that. It’s not trying to decide which algorithm it should be (or should have been). Having the algorithm pretend—as a means of reaching a decision about C—that it’s deciding which algorithm to be is somewhat reminiscent of the idea behind TDT and has nothing to do with CDT as traditionally conceived of, despite the use of causal reasoning.
In AI, you do not discuss it in terms of an anthropomorphic “trying to decide”. For example, there’s a “model-based, utility-based agent”. Computing what the world will be like if a decision is made in a specific way is part of the model of the world, i.e. part of the laws of physics as the agent knows them. If this physics implements the predictor at all, a model-based, utility-based agent will one-box.
I don’t see at all what’s wrong or confusing about saying that an agent is trying to decide something; or even, for that matter, that an algorithm is trying to decide something, even though that’s not a precise way of speaking.
More to the point, though, doesn’t what you describe fit EDT and CDT both, with each theory having a different way of computing “what the world will be like if the decision is made in a specific way”?
Decision theories do not compute what the world will be like. Decision theories select the best choice, given a model with this information included. How the world works is not something a decision theory figures out; it is not a physicist, and it has no means to perform experiments outside of its current model. You need to take care of that yourself, and build it into your model.
If a decision theory had the weakness that certain, possible scenarios could not be modeled, that would be a problem. Any decision theory will have the feature that they work with the model they are given, not with the model they should have been given.
Causality is underspecified, whereas the laws of physics are fairly well defined, especially for a hypothetical where you can e.g. assume deterministic Newtonian mechanics for the sake of simplifying the analysis. You have the hypothetical: a sequence of commands to the robotic manipulator. You process the laws of physics to conclude that this sequence of commands picks up one box of unknown weight. You need to determine the weight of the box to see if this sequence of commands will lead to the robot tipping over. Now, you see, to determine that sort of thing, models of the physical world tend to walk backwards and forwards in time: for example, if your window shatters and a rock flies in, you can conclude that there’s a rock thrower in the direction that the rock came from, and you do it by walking backwards in time.
In a way, although it does not resemble how EDT tends to be presented.
On the CDT, formally speaking, what do you think P(A if B) even is? Keep in mind that, given some deterministic, computable laws of physics, and given that you ultimately decide on an option B, in the hypothetical that you decide on an option C where C != B, it will be provable that C = B, i.e. you have a contradiction in the hypothetical.
In a way, although it does not resemble how EDT tends to be presented.
So then how does it not fall prey to the problems of EDT? It depends on the precise formalization of “computing what the world will be like if the action is taken, according to the laws of physics”, of course, but I’m having trouble imagining how that would not end up basically equivalent to EDT.
On the CDT, formally speaking, what do you think P(A if B) even is?
That is not the problem at all, it’s perfectly well-defined. I think if anything, the question would be what CDT’s P(A if B) is intuitively.
So then how does it not fall prey to the problems of EDT?
What are those, exactly? The “smoking lesion”? It specifies that the output of the decision theory correlates with the lesion. Who knows how, but for it to actually correlate with the decision of that decision theory other than via the inputs to the decision theory, it has got to be our good old friend Omega doing some intelligent design and adding or removing that lesion. (And if it does so through the inputs, then it’ll smoke.)
That is not the problem at all, it’s perfectly well-defined.
Given a world state A which evolves into world state B (computable, deterministic universe), the hypothetical “what if world state A evolved into C, where C != B” will lead, among other absurdities, to a proof that B = C, contradicting that B != C (of course you can ensure that this particular proof won’t be reached with various silly hacks, but you’re still making false assumptions and arriving at false conclusions). Maybe what you call ‘causal’ decision theory should be called ‘acausal’, because it in fact ignores the causes of the decision, and goes as far as to break down its world model to do so. If you don’t make contradictory assumptions, then you have a world state A that evolves into world state B, and a world state A' that evolves into world state C, and in the hypothetical that the state becomes C != B, the prior state has got to be A' != A. Yeah, it looks weird to Westerners, with their philosophy of free will and of your decisions having the potential to send the same world down a different path. I am guessing it is much, much less problematic if you were more culturally exposed to determinism/fatalism. This may be a very interesting topic within comparative anthropology.
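A minimal toy version of that point, treating the universe as a pure function on world states and reusing the labels A, A', B, C from above:

```python
# Deterministic toy universe: the next world state is a pure function of the current one.
def evolve(state):
    return {"A": "B", "A'": "C"}[state]

# Consistent hypotheticals: a different outcome requires a different prior state.
print(evolve("A"))    # B
print(evolve("A'"))   # C

# The surgical hypothetical "A evolves into C" asserts evolve("A") == "C" by fiat,
# while the dynamics give evolve("A") == "B"; from that contradiction anything follows.
assert evolve("A") == "B"
```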
The main distinction between philosophy and mathematics (or philosophy done by mathematicians) seems to be that in the latter, if you get yourself a set of assumptions leading to contradictory conclusions (example: in Newcomb’s, on one hand it can be concluded that agents which one-box walk out with more money; on the other hand, agents that choose to two-box get strictly more money than those that one-box), it is generally concluded that something is wrong with the assumptions, rather than argued which of the conclusions is truly correct given the assumptions.
The values of A, C and P are all equivalent. You insist on making CDT determine C in a model where it does not know these are correlated. This is a problem with your model.
You are applying a decision theory to the node C, which means you are implicitly stating: there are multiple possible choices to be made at this point, and this decision can be made independent of nodes not in front of this one.
Yes. That’s basically the definition of CDT. That’s also why CDT is no good. You can quibble about the word but in “the literature”, ‘CDT’ means just that.
Well, a practically important example is a deterministic agent which is copied and then copies play prisoner’s dilemma against each other.
There you have agents that use physics. Those, when evaluating hypothetical choices, use some model of physics, in which an agent can model itself as a copyable deterministic process which it can’t directly simulate (i.e. it knows that the matter inside its head obeys known laws of physics). In the hypothetical that it cooperates, after processing the physics, it is found that the copy cooperates; in the hypothetical that it defects, it is found that the copy defects.
And then there’s philosophers. The worse ones don’t know much about causality. They presumably have some sort of ill specified oracle that we don’t know how to construct, which will tell them what is a ‘consequence’ and what is a ‘cause’ , and they’ll only process the ‘consequences’ of the choice as the ‘cause’. This weird oracle tells us that other agent’s choice is not a ‘consequence’ of the decision, so it can not be processed. It’s very silly and not worth spending brain cells on.
Playing prisoner’s dilemma against a copy of yourself is mostly the same problem as Newcomb’s. Instead of Omega’s prediction being perfectly correlated with your choice, you have an identical agent whose choice will be perfectly correlated with yours—or, possibly, randomly distributed in the same manner. If you can also assume that both copies know this with certainty, then you can do the exact same analysis as for Newcomb’s problem.
Whether you have a prediction made by an Omega or a decision made by a copy really does not matter, as long as they both are automatically going to be the same as your own choice, by assumption in the problem statement.
The copy problem is well specified, though, unlike the “predictor”. I clarified more in private. The worst part about Newcomb’s is that all the ex-religious folks seem to substitute something they formerly knew as ‘god’ for the predictor. The agent can also be further specified, e.g. as a finite Turing machine made of cogs and levers and tape with holes in it. The agent can’t simulate itself directly, of course, but it knows some properties of itself without simulation. E.g. it knows that in the alternative that it chooses to cooperate, its initial state was in set A (the states that result in cooperation); in the alternative that it chooses to defect, its initial state was in set B (the states that result in defection); and that no state is in both sets.
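A sketch of that copy game: the agent cannot simulate itself, but it knows that whatever it outputs, the identical copy with the same inputs outputs the same thing, so only the two diagonal outcomes are on the table (the payoff numbers are the usual textbook ones):

```python
# Prisoner's dilemma payoffs for (my move, copy's move); standard illustrative values.
payoff = {("C", "C"): 3, ("C", "D"): 0,
          ("D", "C"): 5, ("D", "D"): 1}

def my_payoff_if(move):
    # The copy is identical and sees the same inputs, so in the hypothetical
    # where my initial state is one that outputs `move`, so is the copy's.
    return payoff[(move, move)]

best = max(("C", "D"), key=my_payoff_if)
print(best, my_payoff_if(best))    # C 3 -- mutual cooperation
```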
I’m with incogn on this one: either there is predictability or there is choice; one cannot have both.
Incogn is right in saying that, from Omega’s point of view, the agent is purely deterministic, i.e. more or less equivalent to a computer program. Incogn is slightly off the mark in conflating determinism with predictability: a system can be deterministic but still not predictable; this is the foundation of cryptography. Deterministic systems are either predictable or they are not. Unless Newcomb’s problem explicitly allows the agent to be non-deterministic, but this is unclear.
The only way a deterministic system becomes unpredictable is if it incorporates a source of randomness that is stronger than the ability of a given intelligence to predict. There are good reasons to believe that there exist rather simple sources of entropy that are beyond the predictive power of any fixed super-intelligence—this is not just the foundation of cryptography, but is generically studied under the rubric of ‘chaotic dynamical systems’. I suppose you also have to believe that P is not NP. Or maybe I should just mutter ‘Turing Halting Problem’. (unless omega is taken to be a mythical comp-sci “oracle”, in which case you’ve pushed decision theory into that branch of set theory that deals with cardinal numbers larger than the continuum, and I’m pretty sure you are not ready for the dragons that lie there.)
If the agent incorporates such a source of non-determinism, then Omega is unable to predict, and the whole paradox falls down. Either Omega can predict, in which case EDT, or Omega cannot predict, in which case CDT. Duhhh. I’m sort of flabbergasted, because these points seem obvious to me … the Newcomb paradox, as given, seems poorly stated.
Think of real people making choices and you’ll see it’s the other way around. The carefully chosen paths are the predictable ones if you know the variables involved in the choice. To be unpredictable, you need think and choose less.
Hell, the archetypical imagery of someone giving up on choice is them flipping a coin or throwing a dart with closed eyes—in short resorting to unpredictability in order to NOT choose by themselves.
I do not think the standard usage is well defined, and avoiding these terms altogether is not possible, seeing as they are in the definition of the problem we are discussing.
Interpretations of the words and arguments for the claim are the whole content of the ancestor post. Maybe you should start there instead of quoting snippets out of context and linking unrelated fallacies? Perhaps, by specifically stating the better and more standard interpretations?
Huh? Can you explain? Normally, one states that a mechanical device is “predictable”: given its current state and some effort, one can discover its future state. Machines don’t have the ability to choose. Normally, “choice” is something that only a system possessing free will can have. Is that not the case? Is there some other “standard usage”? Sorry, I’m a newbie here; I honestly don’t know more about this subject, other than what I can deduce by my own wits.
Machines don’t have preferences, by which I mean they have no conscious self-awareness of a preferred state of the world—they can nonetheless execute “if, then, else” instructions.
That such instructions do not follow their preferences (as they lack such) can perhaps be considered sufficient reason to say that machines don’t have the ability to choose—that they’re deterministic doesn’t… “Determining something” and “Choosing something” are synonyms, not opposites after all.
Newcomb’s problem makes the stronger precondition that the agent is both predictable and that in fact one action has been predicted. In that specific situation, it would be hard to argue against that one action being determined and immutable, even if in general there is debate about the relationship between determinism and predictability.
Hmm, the FAQ, as currently worded, does not state this. It simply implies that the agent is human, that Omega has made 1000 correct predictions, and that Omega has billions of sensors and a computer the size of the moon. That’s large, but finite. One may assign some finite complexity to Omega, say 100 bits per atom times the number of atoms in the moon, whatever. I believe that one may devise pseudo-random number generators that can defy this kind of compute power. The relevant point here is that Omega, while powerful, is still not “God” (infinite, infallible, all-seeing), nor is it an “oracle” (in the computer-science definition of an “oracle”: viz. a machine that can decide undecidable computational problems).
I do not want to make estimates on how and with what accuracy Omega can predict. There is not nearly enough context available for this. Wikipedia’s version has no detail whatsoever on the nature of Omega. There seems to be enough discussion to be had, even with the perhaps impossible assumption that Omega can predict perfectly, always, and that this can be known by the subject with absolute certainty.
I think I agree, by and large, despite the length of this post.
Whether choice and predictability are mutually exclusive depends on what choice is supposed to mean. The word is not exactly well defined in this context. In some sense, “if variable > threshold then A, else B” is a choice.
I am not sure where you think I am conflating. As far as I can see, perfect prediction is obviously impossible unless the system in question is deterministic. On the other hand, determinism does not guarantee that perfect prediction is practical or feasible. The computational complexity might be arbitrarily large, even if you have complete knowledge of an algorithm and its input. I can not really see the relevance to my above post.
Finally, I am myself confused as to why you want two different decision theories (CDT and EDT) instead of two different models for the two different problems conflated into the single identifier Newcomb’s paradox. If you assume a perfect predictor, and thus full correlation between prediction and choice, then you have to make sure your model actually reflects that.
Let’s start out with a simple matrix; P/C/1/2 are shorthands for prediction, choice, one-box, two-box.
P1 C1: 1000
P1 C2: 1001
P2 C1: 0
P2 C2: 1
If the value of P is unknown, but independent of C: Dominance principle, C=2, entirely straightforward CDT.
If, however, the value of P is completely correlated with C, then the matrix above is misleading: P and C cannot be different, and are really only a single variable, which should be wrapped in a single identifier. The matrix you are actually applying CDT to is the following one:
(P&C)1: 1000
(P&C)2: 1
The best choice is (P&C)=1, again by straightforward CDT.
The only failure of CDT is that it gives different, correct solutions to different problems, each with a properly defined correlation between prediction and choice. The only advantage of EDT is that it makes it easier to sneak in this information without noticing it, even when it would be incorrect to do so. It is entirely possible to have a situation where prediction and choice are correlated, but the decision theory is not allowed to know this and must assume that they are uncorrelated. The decision theory should give the wrong answer in this case.
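The same two models as a quick sketch (payoffs in thousands, as in the matrices above): the four-row matrix with P treated as independent of C is settled by dominance, while the collapsed matrix with the correlation constraint built in is settled by a plain maximum:

```python
# Four-row matrix: P treated as an unknown independent of C.
payoff = {("P1", "C1"): 1000, ("P1", "C2"): 1001,
          ("P2", "C1"): 0,    ("P2", "C2"): 1}

c2_dominates = all(payoff[(p, "C2")] > payoff[(p, "C1")] for p in ("P1", "P2"))
print("C2 dominates C1:", c2_dominates)            # True -> two-box

# Collapsed matrix: the correlation constraint P == C removes two rows,
# leaving a single variable (P&C) with two possible values.
collapsed = {(p, c): v for (p, c), v in payoff.items() if p[1] == c[1]}
best = max(collapsed, key=collapsed.get)
print("best (P&C):", best, collapsed[best])        # ('P1', 'C1') 1000
```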
But that’s not CDT reasoning. CDT uses surgery instead of conditionalization, that’s the whole point. So it doesn’t look at P(prediction = A|A), but at P(prediction = A|do(A)) = P(prediction = A).
Your example with the cab doesn’t really involve a choice at all, because John’s going to work is effectively determined completely by the arrival of the cab.
I am not sure where our disagreement lies at the moment.
Are you using choice to signify strongly free will? Because that means the hypothetical Omega is impossible without backwards causation, leaving us at (b) but not (a) and the whole of Newcomb’s paradox moot. Whereas, if you include in Newcomb’s paradox, the choice of two-boxing will actually cause the big box to be empty, whereas the choice of one-boxing will actually cause the big box to contain a million dollars by a mechanism of backwards causation, then any CDT model will solve the problem.
Perhaps we can narrow down our disagreement by taking the following variation of my example, where there is at least a bit more of choice involved:
Imagine John, who never understood why he gets thirsty. Despite there being a regularity in when he chooses to drink, this is for him a mystery. Every hour, Omega must predict whether John will choose to drink within the next hour. Omega’s prediction is made secret to John until after the time interval has passed. Omega and John play this game every hour for a month, and it turns out that while far from perfect, Omega’s predictions are a bit better than random. Afterwards, Omega explains that it beats blind guesses by knowing that John will very rarely wake up in the middle of the night to drink, and that his daily water consumption follows a normal distribution with a mean and standard deviation that Omega has estimated.
I’m not entirely sure either. I was just saying that a causal decision theorist will not be moved by Wildberger’s reasoning, because he’ll say that Wildberger is plugging in the wrong probabilities: when calculating an expectation, CDT uses not conditional probability distributions but surgically altered probability distributions. You can make that result in one-boxing if you assume backwards causation.
I think the point we’re actually talking about (or around) might be the question of how CDT reasoning relates to you (a). I’m not sure that the causal decision theorist has to grant that he is in fact interpreting the problem as “not (a) but (b)”. The problem specification only contains the information that so far, Omega has always made correct predictions. But the causal decision theorist is now in a position to spoil Omega’s record, if you will. Omega has already made a prediction, and whatever the causal decision theorist does now isn’t going to change that prediction. The fact that Omega’s predictions have been absolutely correct so far doesn’t enter into the picture. It just means that for all agents x that are not the causal decision theorist, P(x does A|Omega predicts that x does A) = 1 (and the same for B, and whatever value than 1 you might want for an imperfect predictor Omega).
About the way you intend (a), the causal decision theorist would probably say that’s backward causation and refuse to accept it.
One way of putting it might be that the causal decision theorist simply has no way of reasoning with the information that his choice is predetermined, which is what I think you intend to convey with (a). Therefore, he has no way of (hypothetically) inferring Omega’s prediction from his own (hypothetical) action (because he’s only allowed to do surgery, not conditionalization).
No, actually. Just the occurrence of a deliberation process whose outcome is not immediately obvious. In both your examples, that doesn’t happen: John’s behavior simply depends on the arrival of the cab or his feeling of thirst, respectively. He doesn’t, in a substantial sense, make a decision.
(Thanks for discussing!)
I will address your last paragraph first. The only significant difference between my original example and the proper Newcomb’s paradox is that, in Newcomb’s paradox, Omega is made a predictor by fiat and without explanation. This allows perfect prediction and choice to sneak into the same paragraph without obvious contradiction. It seems, if I try to make the mode of prediction transparent, you protest there is no choice being made.
From Omega’s point of view, its Newcomb subjects are not making choices in any substantial sense, they are just predictably acting out their own personality. That is what allows Omega its predictive power. Choice is not something inherent to a system, but a feature of an outsider’s model of a system, in much the same sense as random is not something inherent to a Eeny, meeny, miny, moe however much it might seem that way to children.
As for the rest of our disagreement, I am not sure why you insist that CDT must work with a misleading model. The standard formulation of Newcomb’s paradox is inconsistent or underspecified. Here are some messy explanations for why, in list form:
“Omega predicts accurately, then you get to choose” is a false model, because “Omega has predicted you will two-box, then you get to choose” does not actually let you choose; one-boxing is an illegal choice, and two-boxing the only legal choice (insert “In Soviet Russia” joke here)
“You get to choose, then Omega retroactively fixes the contents of the boxes” is fine, and CDT solves it by one-boxing
“Omega tries to predict but is just blindly guessing, then you really get to choose” is fine, and CDT solves it by two-boxing
“You know that Omega has perfect predictive power and are free to commit to either one- or two-boxing as you prefer” is nowhere near the original formulation of Newcomb’s problem, but is obviously solved by one-boxing
“You are not sure about Omega’s predictive power and are torn between trying to ‘game’ it and cooperating with it” is not Newcomb’s problem
“Your choice has to be determined by a deterministic algorithm, but you are not allowed to know this when designing the algorithm, so you must instead work in ignorance and design it by a false dominance principle” is just cheating
Not if you’re a compatibilist, which Eliezer is, last I checked.
The post scav made more or less represents my opinion here. Compatibilism, choice, free will and determinism involve too many vague definitions for me to discuss with. For compatibilism to make any sort of sense to me, I would need a new definition of free will. It is already difficult to discuss how things are without simultaneously having to discuss how to use and interpret words.
Trying to leave the problematic words out of this, my claim is that the only reason CDT ever gives a wrong answer in a Newcomb’s problem is that you are feeding it the wrong model. http://lesswrong.com/lw/gu1/decision_theory_faq/8kef elaborates on this without muddying the waters too much with the vaguely defined terms.
I don’t think compatibilist means that you can pretend two logically mutually exclusive propositions can both be true. If it is accepted as a true proposition that Omega has predicted your actions, then your actions are decided before you experience the illusion of “choosing” them. Actually, whether or not there is an Omega predicting your actions, this may still be true.
Accepting the predictive power of Omega, it logically follows that when you one-box you will get the $1M. A CDT-rational agent only fails on this if it fails to accept the prediction and constructs a (false) causal model that includes the incoherent idea of “choosing” something other than what must happen according to the laws of physics. Does CDT require such a false model to be constructed? I dunno. I’m no expert.
The real causal model is that some set of circumstances decided what you were going to “choose” when presented with Omega’s deal, and those circumstances also led to Omega’s 100% accurate prediction.
If being a compatibilist leads you to reject the possibility of such a scenario, then it also logically excludes the perfect predictive power of Omega and Newcomb’s problem disappears.
But in the problem as stated, you will only two-box if you get confused about the situation or you don’t want $1M for some reason.
Where’s the illusion? If I choose something according to my own preferences, why should it be an illusion merely because someone else can predict that choice if they know said preferences? Why does their knowledge of my action affect my decision-making powers?
The problem is that you’re using the words “decided” and “choosing” with different meanings at the same time. One meaning is having the final input on the action I take; the other seems to be about when the output can be calculated.
The output can be calculated before I even provide the input, sure, but it’s still my input, and therefore my decision. There is nothing illusory about it, no matter how many people calculated said input in advance: even though they calculated it, it was I who controlled it.
The knowledge of your future action is only knowledge if it has a probability of 1. Omega acquiring that knowledge, by calculation or otherwise, does not affect your choice; but the very fact that such knowledge can exist (whether Omega has it or not) means your choice is determined absolutely.
What happens next is exactly the everyday meaning of “choosing”. Signals zap around your brain in accordance with the laws of physics and evaluate courses of action according to some neural representation of your preferences, and one course of action is the one you will “decide” to do. Soon afterwards, your conscious mind becomes aware of the decision and feels like it made it. That’s one part of the illusion of choice.
EDIT: I’m assuming you’re a human. A rational agent need not have this incredibly clunky architecture.
The second part of the illusion is specific to this very artificial problem. The counterfactual (you choose the opposite of what Omega predicted) just DOESN’T EXIST. It has probability 0. It’s not even that it could have happened in another branch of the multiverse—it is logically precluded by the condition of Omega being able to know with probability 1 what you will choose. 1 − 1 = 0.
Do you think Newcomb’s Box fundamentally changes if Omega is only right with a probability of 99.9999999999999%?
That process “is” my mind—there’s no mind anywhere which can be separate from those signals. So you say that my mind feels like it made a decision but you think this is false? I think it makes sense to say that my mind feels like it made a decision and it’s completely right most of the time.
My mind would only be having the “illusion” of choice if someone else, someone outside my mind, intervened between the signals and implanted a different decision, according to their own desires, and the rest of my brain just rationalized the choice that had already been taken. But as long as the process is truly internal, the process is truly my mind’s, and my mind’s feeling that it made the choice corresponds to reality.
That the opposite choice isn’t made in any universe doesn’t mean that the choice actually made isn’t real; indeed, the less real the opposite choice, the more real your actual choice.
Taboo the word “choice”, and let’s talk about “decision-making process”. Your decision-making process exists in your brain, and therefore it’s real. It doesn’t have to be uncertain in outcome to be real; it’s real in the sense that it is actually occurring. Occurring in a deterministic manner, yes, but how does that make the process any less real?
Is gravity unreal or illusory because it’s deterministic and predictable? No. Then neither is your decision-making process unreal or illusory.
Yes, it is your mind going through a decision-making process. But most people feel that their conscious mind is the part making the decision, and for humans that isn’t actually true, although attention seems to be part of consciousness, and attention to different parts of the input probably influences what happens. I would call that feeling of deciding consciously, when that isn’t really what is happening, somewhat illusory.
The decision making process is real, but my feeling of there being an alternative I could have chosen instead (even though in this universe that isn’t true) is inaccurate. Taboo “illusion” too if you like, but we can probably agree to call that a different preference for usage of the words and move on.
Incidentally, I don’t think Newcomb’s problem changes dramatically as Omega’s success rate varies. You just get different expected values for one-boxing and two-boxing on a continuous scale, don’t you?
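For concreteness, here is a minimal sketch of that continuous scale, assuming the usual $1,000,000 / $1,000 payoffs and computing the expected values conditional on each choice (so this is the evidential calculation, not surgical CDT); the function name is just for illustration:

```python
# Expected value of one-boxing vs two-boxing, conditional on the choice,
# as a function of Omega's accuracy p (standard $1,000,000 / $1,000 payoffs).
def expected_values(p):
    one_box = p * 1_000_000                     # big box is full with prob. p
    two_box = p * 1_000 + (1 - p) * 1_001_000   # big box is empty with prob. p
    return one_box, two_box

for p in (0.5, 0.5005, 0.9, 0.999999999999999):
    one, two = expected_values(p)
    print(f"p={p}: one-box {one:,.0f}, two-box {two:,.0f}")
```

The two lines cross at p ≈ 0.5005, so anything noticeably better than a coin flip already favours one-boxing on this reading; nothing qualitative changes between 99.9999999999999% and 100%.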
Regarding illegal choices, the transparent variation makes it particularly clear: you cannot follow the strategy “take both boxes if you see a million in the first box, and take one box otherwise”.
You can walk backwards from your decision to the point where a copy of you was made, and then forward to the point where that copy is processed by Omega, to find how your decision relates to the state of the boxes causally.
I agree with the content, though I am not sure if I approve of a terminology where causation traverses time like a two-way street.
Underlying physics is symmetric in time. If you assume that the state of the world is such that one box is picked up by your arm, that imposes constraints on both the future and the past light cone. If you do not process the constraints on the past light cone then your simulator state does not adhere to the laws of physics, namely, the decision arises out of thin air by magic.
If you do process constraints fully then the action to take one box requires pre-copy state of “you” that leads to decision to pick one box, which requires money in one box; action to take 2 boxes likewise, after processing constraints, requires no money in the first box. (“you” is a black box which is assumed to be non-magical, copyable, and deterministic, for the purpose of the exercise).
edit: came up with an example. Suppose ‘you’ is a robotics controller: you know you’re made of various electrical components, and you’re connected to the battery and some motors. You evaluate a counterfactual where you put a current onto a wire for some time. Constraint imposed on the past: the battery has been charged within the last 10 hours, because otherwise it couldn’t supply enough current. If the constraints contradict known reality, then you know you can’t do this action. Suppose there’s a replacement battery pack 10 meters away from the robot, and the robot is unsure whether the packs were swapped 5 hours ago; in the alternative that they weren’t, it would not have enough charge to get to the extra pack; in the alternative that they were swapped, it doesn’t need to get to the spent extra pack. Evaluating the hypothetical where it gets to the extra pack, it knows the packs were swapped in the past and the extra pack is spent. (Of course for simplicity one can do all sorts of stuff, such as electrical currents coming out of nowhere, but outside the context of philosophical speculation the cause of the error is very clear.)
We do, by and large, agree. I just thought, and still think, the terminology is somewhat misleading. This is probably not a point I should press, because I have no mandate to dictate how words should be used, and I think we understand each other, but maybe it is worth a shot.
I fully agree that some values in the past and future can be correlated. This is more or less the basis of my analysis of Newcomb’s problem, and I think it is also what you mean by imposing constraints on the past light cone. I just prefer to use different words for backwards correlation and forwards causation.
I would say that the robot getting the extra pack necessitates that it had already been charged and did not need the extra pack, while not having been charged earlier would cause it to fail to recharge itself. I think there is a significant difference between how not being charged causes the robot to run out of power, versus how running out of power necessitates that it has not been charged.
You may of course argue that the future and the past are the same from the viewpoint of physics, and that either can be said to cause the other. However, as long as people consider the future and the past to be conceptually completely different, I do not see the hurry to erode these differences in the language we use. It probably would not be a good idea to make “tomorrow” refer to both the day before and the day after today, either.
I guess I will repeat: This is probably not a point I should press, because I have no mandate to dictate how words should be used.
I’d be the first to agree on terminology here. I’m not suggesting that the choice of box causes money to be in the box, simply that those two are causally connected, in the physical sense. The whole issue seems to stem from taking the word ‘causal’ in causal decision theory and treating it as more than a mere name, bringing in enormous amounts of confused philosophy which doesn’t capture very well how physics works.
When deciding, you evaluate hypotheticals of you making different decisions. A hypothetical is like a snapshot of the world state. Laws of physics very often have to be run backwards from the known state to deduce the past state, and then forwards again to deduce the future state. E.g. a military robot sees a hand grenade flying into its field of view; it calculates the motion backwards to find where it was thrown from, locating the grenade thrower, then uses a model of the grenade thrower to predict another grenade in the future.
So, you process the hypothetical where you picked up one box, to find how much money you get. You have the known state: you picked one box. You deduce that the past state of the deterministic you must have been Q, which results in picking up one box; a copy of that state was made, and that state resulted in a prediction of one box. You conclude that you get 1 million. You do the same for picking 2 boxes: the previous state must be R, etc., and you conclude you get 1000. You compare, and you pick the universe where you take one box.
(And with regards to the “smoking lesion” problem: the smoking lesion postulates a blatant logical contradiction—it postulates that the lesion affects the choice, which contradicts that the choice is made by the agent we are speaking of. As a counterexample to a decision theory, it is laughably stupid.)
Excellent.
I think laughably stupid is a bit too harsh. As I understand things, confusion regarding Newcomb’s problem leads to new decision theories, which in turn makes the smoking lesion problem interesting, because the new decision theories introduce new, critical weaknesses in order to solve Newcomb’s problem. I do agree, however, that the smoking lesion problem is trivial if you stick to a sensible CDT model.
The problems with EDT are quite ordinary… it’s looking for good news, and also, it is kind of under-specified (e.g. some argue it’d two-box in Newcomb’s after learning physics). A decision theory cannot be disqualified for giving the ‘wrong’ answer in the hypothetical that 2*2=5, or in the hypothetical that (a or not a) = false, or in the hypothetical that the decision is simultaneously controlled by the decision theory and set, without involvement of the decision theory, by the lesion (and a random process if the correlation is imperfect).
I probably wasn’t expressing myself quite clearly. I think the difference is this: Newcomb subjects are making a choice from their own point of view. Your Johns aren’t really making a choice even from their internal perspective: they just see if the cab arrives/if they’re thirsty and then, without deliberation, follow what their policy for such cases prescribes. I think this difference is intuitively substantial enough that the John cases can’t be used as intuition pumps for anything relating to Newcomb’s.
I don’t think it is, actually. It just seems so because it presupposes that your own choice is predetermined, which is kind of hard to reason with when you’re right in the process of making the choice. But that’s a problem with your reasoning, not with the scenario. In particular, the CDT agent has a problem with conceiving of his own choice as predetermined, and therefore has trouble formulating Newcomb’s problem in a way that he can use—he has to choose between getting two-boxing as the solution and assuming backward causation, neither of which is attractive.
Then I guess I will try to leave it to you to come up with a satisfactory example. The challenge is to give Omega Newcomb-like predictive power, while substantiating how Omega achieves this, and while still passing your own standard that the subject makes a choice from its own point of view. It is very easy to accidentally create paradoxes in mathematics by assuming mutually exclusive properties for an object, and the best way to discover these is generally to see if it is possible to construct or find an instance of the object described.
This is not a failure of CDT, but one of your imagination. Here is a simple, five-minute model which has no problem conceiving of Newcomb’s problem without any backwards causation:
T=0: Subject is initiated in a deterministic state which can be predicted by Omega.
T=1: Omega makes an accurate prediction for the subject’s decision in Newcomb’s problem by magic / simulation / reading code / infallible heuristics. Denote the possible predictions P1 (one-box) and P2.
T=2: Omega sets up Newcomb’s problem with appropriate box contents.
T=3: Omega explains the setup to the subject and disappears.
T=4: Subject deliberates.
T=5: Subject chooses either C1 (one-box) or C2.
T=6: Subject opens box(es) and receives payoff dependent on P and C.
You can pretend to enter this situation at T=4, as suggested by the standard Newcomb’s problem. Then you can use the dominance principle and you will lose. But this is just using a terrible model. You entered at T=0, because you were needed at T=1 for Omega’s inspection. If you did not enter the situation at T=0, then you can freely make a choice C at T=5 without any correlation to P, but that is not Newcomb’s problem.
Instead, at T=4 you become aware of the situation, and your decision making algorithm must return a value for C. If you consider this only from T=4 and onward, this is completely uninteresting, because C is already determined. At T=1, P was determined to be either P1 or P2, and the value of C follows directly from this. Obviously, healthy one-boxing code wins and unhealthy two-boxing code loses, but there is no choice being made here, just different code with different return values being rewarded differently, and that is not Newcomb’s problem either.
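To make that concrete, here is a minimal sketch of the timeline in code, assuming, as one possible mechanism, that Omega predicts at T=1 simply by running the agent’s deterministic code (the function names are mine):

```python
# Two deterministic agents, fixed at T=0.
def one_boxer():
    return 1   # 1 = one-box
def two_boxer():
    return 2   # 2 = two-box

def run_newcomb(agent):
    prediction = agent()                               # T=1: Omega predicts by simulation
    big_box = 1_000_000 if prediction == 1 else 0      # T=2: boxes are filled
    choice = agent()                                   # T=4-5: agent "deliberates" and chooses
    return big_box if choice == 1 else big_box + 1_000 # T=6: payoff

print(run_newcomb(one_boxer))   # 1000000
print(run_newcomb(two_boxer))   # 1000
```

Nothing interesting happens at T=4 and T=5 here: the payoff was settled by which code entered at T=0.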
Finally, we will work under the illusion of choice, with Omega as a perfect predictor. We realize that T=0 is the critical moment, seeing as everything at subsequent T follows directly from it. We work backwards as follows:
T=6: My preferences are P1C2 > P1C1 > P2C2 > P2C1.
T=5: I should choose either C2 or C1 depending on the current value of P.
T=4: this is when all this introspection is happening
T=3: this is why
T=2: I would really like there to be a million dollars present.
T=1: I want Omega to make prediction P1.
T=0: Whew, I’m glad I could do all this introspection which made me realize that I want P1 and the way to achieve this is C1. It would have been terrible if my decision making just worked by the dominance principle. Luckily, the epiphany I just had, C1, was already predetermined at T=0, Omega would have been aware of this at T=1 and made the prediction P1, so (...) and P1 C1 = a million dollars is mine.
Shorthand version of all the above: if the decision is necessarily predetermined before T=4, then you should not pretend you make the decision at T=4. Insert a decision-making step at T=0.5, which causally determines the value of P and C. Apply your CDT to this step.
This is the only way of doing CDT honestly, and it is the slightest bit messy, but that is exactly what happens when the problem itself contains a reference to the decision the decision theory is going to make in the future, perfectly correlated with that decision before the decision has overtly been made. This sort of self-reference creates impossibilities out of thin air every day of the week, such as when Pinocchio says “my nose will grow now”. The good news is that this way of doing it is a lot less messy than inventing a new, superfluous decision theory, and it also allows you to deal with problems like the psychopath button without any trouble whatsoever.
But isn’t this precisely the basic idea behind TDT?
The algorithm you are suggesting goes something like this: choose that action which, if it had been predetermined at T=0 that you would take it, would lead to the maximal-utility outcome. You can call that CDT, but it isn’t. Sure, it’ll use causal reasoning for evaluating the counterfactual, but not everything that uses causal reasoning is CDT. CDT is surgically altering the action node (and not some precommitment node) and seeing what happens.
If you take a careful look at the model, you will realize that the agent has to be precommitted, in the sense that what he is going to do is already fixed. Otherwise, the step at T=1 is impossible. I do not mean that he has consciously precommitted himself to win at Newcomb’s problem, but trivially, a deterministic agent must be precommitted.
It is meaningless to apply any sort of decision theory to a deterministic system. You might as well try to apply decision theory to the balls in a game of billiards, which assign high utility to remaining on the table but have no free choices to make. For decision theory to have a function, there needs to be a choice to be made between multiple, legal options.
As far as I have understood, your problem is that, if you apply CDT with an action node at T=4, it gives the wrong answer. At T=4, there is only one option to choose, so the choice of decision theory is not exactly critical. If you want to analyse Newcomb’s problem, you have to insert an action node at T<1, while there is still a choice to be made, and CDT will do this admirably.
Yes, it is. The point is that you run your algorithm at T=4, even if it is deterministic and therefore its output is already predetermined. Therefore, you want an algorithm that, executed at T=4, returns one-boxing. CDT simply does not do that.
Ultimately, it seems that we’re disagreeing about terminology. You’re apparently calling something CDT even though it does not work by surgically altering the node for the action under consideration (that action being the choice of box, not the precommitment at T<1) and then looking at the resulting expected utilities.
If you apply CDT at T=4 with a model which builds in the knowledge that the choice C and the prediction P are perfectly correlated, it will one-box. The model is exceedingly simple:
T’=0: Choose either C1 or C2
T’=1: If C1, then gain 1000. If C2, then gain 1.
This excludes the two impossible combinations, C1P2 and C2P1, since they violate the correlation constraint. CDT makes the wrong choice when these two are included, because then you have removed the correlation constraint from the model, changing the problem into one in which Omega is not a predictor.
What is your problem with this model?
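A minimal numeric rendering of that collapsed model, using the same 1000 / 1 payoffs as above (the labels are just mine):

```python
# The collapsed model: P and C cannot differ, so there are only two consistent
# worlds to compare, and "surgery" on the single variable (P&C) just means
# choosing between them.
payoffs = {"(P&C)1": 1000, "(P&C)2": 1}
best = max(payoffs, key=payoffs.get)
print(best, payoffs[best])   # (P&C)1 1000 -> one-box
```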
Okay, so I take it to be the defining characteristic of CDT that it makes a particular use of counterfactuals. So far, I have been arguing on the basis of a Pearlean conception of counterfactuals, and then this is what happens:
Your causal network has three variables, A (the algorithm used), P (Omega’s prediction), C (the choice). The causal connections are A → P and A → C. There is no causal connection between P and C.
Now the CDT algorithm looks at counterfactuals with the antecedent C1. In a Pearlean picture, this amounts to surgery on the C-node, so no inference contrary to the direction of causality is possible. Hence, whatever the value of the P-node, it will seem to the CDT algorithm not to depend on the choice.
Therefore, even if the CDT algorithm knows that its choice is predetermined, it cannot make use of that in its decision, because it cannot update contrary to the direction of causality.
Now it turns out that natural language counterfactuals work very much, but not quite, like Pearl’s counterfactuals: they allow a limited amount of backtracking contrary to the direction of causality, depending on a variety of psychological factors. So if you had a theory of counterfactuals that allowed backtracking in a case like Newcomb’s problem, then a CDT-algorithm employing that conception of counterfactuals would one-box. The trouble would of course be to correctly state the necessary conditions for backtracking. The messy and diverse psychological and contextual factors that seem to be at play in natural language won’t do.
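A small sketch of the three-node network above and the two places one could intervene, to make the surgery point concrete (the function names are mine; 1 = one-box, 2 = two-box):

```python
# Network: A -> P and A -> C, no arrow between P and C.
def value_of(choice, prediction):
    big = 1_000_000 if prediction == 1 else 0
    return big if choice == 1 else big + 1_000

def surgery_on_C(prediction_already_fixed):
    # do(C=c): the arrow from A is cut, P keeps whatever value it had,
    # so two-boxing dominates for either value of P.
    return max((1, 2), key=lambda c: value_of(c, prediction_already_fixed))

def intervene_on_A(algorithm_output):
    # Changing A propagates to both P and C.
    return value_of(algorithm_output, algorithm_output)

print(surgery_on_C(1), surgery_on_C(2))      # 2 2 -> CDT two-boxes either way
print(intervene_on_A(1), intervene_on_A(2))  # 1000000 1000 -> one-boxing A wins
```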
Could you maybe try to give a straight answer to this: what is your problem with my model above? It accurately models the situation. It allows CDT to give a correct answer. It does not superficially resemble the word-for-word statement of Newcomb’s problem.
You are trying to use a decision theory to determine which choice an agent should make, after the agent has already had its algorithm fixed, which causally determines which choice the agent must make. Do you honestly blame that on CDT?
No, it does not, that’s what I was trying to explain. It’s what I’ve been trying to explain to you all along: CDT cannot make use of the correlation between C and P. CDT cannot reason backwards in time. You do know how surgery works, don’t you? In order for CDT to use the correlation, you need a causal arrow from C to P—that amounts to backward causation, which we don’t want. Simple as that.
I’m not sure what the meaning of this is. Of course the decision algorithm is fixed before it’s run, and therefore its output is predetermined. It just doesn’t know its own output before it has computed it. And I’m not trying to figure out what the agent should do—the agent is trying to figure that out. Our job is to figure out which algorithm the agent should be using.
PS: The downvote on your post above wasn’t from me.
You are applying a decision theory to the node C, which means you are implicitly stating: there are multiple possible choices to be made at this point, and this decision can be made independently of any node that is not in front of (downstream of) this one. This means that your model does not model the Newcomb’s problem we have been discussing—it models another problem, where C can have values independent of P, which is indeed solved by two-boxing.
It is not the decision theory’s responsibility to know that the value of node C is somehow supposed to retrospectively alter the state of the branch the decision theory is working in. This is, however, a consequence of the modelling you do. You are deliberately applying CDT too late in your network, such that P, and thus the cost of being a two-boxer, has gone over the horizon, and such that the node C must affect P backwards, not because the problem actually contains backwards causality, but because you want to fix the values of the nodes in the wrong order.
If you do not want to make the assumption of free choice at C, then you can just not promote it to an action node. If the decision at C is causally determined from A, then you can apply a decision theory at node A and follow the causal inference. Then you will, once again, get a correct answer from CDT, this time for the version of Newcomb’s problem where A and C are fully correlated.
If you refuse to reevaluate your model, then we might as well leave it at this. I do agree that if you insist on applying CDT at C in your model, then it will two-box. I do not agree that this is a problem.
You don’t promote C to the action node; it is the action node. That’s the way the decision problem is specified: do you one-box or two-box? If you don’t accept that, then you’re talking about a different decision problem. But in Newcomb’s problem, the algorithm is trying to decide that. It’s not trying to decide which algorithm it should be (or should have been). Having the algorithm pretend—as a means of reaching a decision about C—that it’s deciding which algorithm to be is somewhat reminiscent of the idea behind TDT and has nothing to do with CDT as traditionally conceived of, despite the use of causal reasoning.
In AI, you do not discuss it in terms of anthropomorphic “trying to decide”. For example, there’s a “model-based, utility-based agent”. Computing what the world will be like if a decision is made in a specific way is part of the model of the world, i.e. part of the laws of physics as the agent knows them. If this physics implements the predictor at all, a model-based, utility-based agent will one-box.
I don’t see at all what’s wrong or confusing about saying that an agent is trying to decide something; or even, for that matter, that an algorithm is trying to decide something, even though that’s not a precise way of speaking.
More to the point, though, doesn’t what you describe fit EDT and CDT both, with each theory having a different way of computing “what the world will be like if the decision is made in a specific way”?
Decision theories do not compute what the world will be like. Decision theories select the best choice, given a model with this information included. How the world works is not something a decision theory figures out; it is not a physicist, and it has no means to perform experiments outside of its current model. You need to take care of that yourself, and build it into your model.
If a decision theory had the weakness that certain possible scenarios could not be modeled, that would be a problem. Any decision theory will have the feature that it works with the model it is given, not with the model it should have been given.
Causality is underspecified, whereas the laws of physics are fairly well defined, especially for a hypothetical where you can e.g. assume deterministic Newtonian mechanics for the sake of simplifying the analysis. You have the hypothetical: a sequence of commands to the robotic manipulator. You process the laws of physics to conclude that this sequence of commands picks up one box of unknown weight. You need to determine the weight of the box to see if this sequence of commands will lead to the robot tipping over. Now, you see, to determine that sort of thing, models of the physical world tend to walk backwards and forwards in time: for example, if your window shatters and a rock flies in, you can conclude that there’s a rock thrower in the direction that the rock came from, and you do it by walking backwards in time.
So it’s basically EDT, where you just conditionalize on the action being performed?
In a way, although it does not resemble how EDT tends to be presented.
As for CDT, formally speaking, what do you think P(A if B) even is? Keep in mind that, given some deterministic, computable laws of physics, and given that you ultimately decide on an option B, then in the hypothetical that you decide on an option C where C != B, it will be provable that C = B, i.e. you have a contradiction in the hypothetical.
So then how does it not fall prey to the problems of EDT? It depends on the precise formalization of “computing what the world will be like if the action is taken, according to the laws of physics”, of course, but I’m having trouble imagining how that would not end up basically equivalent to EDT.
That is not the problem at all, it’s perfectly well-defined. I think if anything, the question would be what CDT’s P(A if B) is intuitively.
What are those, exactly? The “smoking lesion”? It specifies that the output of the decision theory correlates with the lesion. Who knows how, but for it to actually correlate with the decision of that decision theory other than via the inputs to the decision theory, it has got to be our good old friend Omega doing some intelligent design and adding or removing that lesion. (And if it correlates through the inputs, then it’ll smoke.)
Given a world state A which evolves into world state B (computable, deterministic universe), the hypothetical “what if world state A evolved into C where C != B” will lead, among other absurdities, to a proof that B=C, contradicting that B != C (of course you can ensure that this particular proof won’t be reached with various silly hacks, but you’re still making false assumptions and arriving at false conclusions). Maybe what you call ‘causal’ decision theory should be called ‘acausal’, because it in fact ignores the causes of the decision, and goes as far as to break down its world model to do so. If you don’t make contradictory assumptions, then you have a world state A that evolves into world state B, and a world state A' that evolves into world state C, and in the hypothetical that the state becomes C != B, the prior state has got to be A' != A. Yeah, it looks weird to Westerners, with their philosophy of free will and your decisions having the potential to send the same world down a different path. I am guessing it is much, much less problematic if you were more culturally exposed to determinism/fatalism. This may be a very interesting topic within comparative anthropology.
The main distinction between philosophy and mathematics (or philosophy done by mathematicians) seems to be that in the latter, if you get yourself a set of assumptions leading to contradictory conclusions (example: in Newcomb’s, on the one hand it can be concluded that agents which one-box walk out with more money; on the other hand, agents that choose to two-box get strictly more money than those that one-box), it is generally concluded that something is wrong with the assumptions, rather than argued which of the conclusions is truly correct given the assumptions.
The values of A, C and P are all equivalent. You insist on making CDT determine C in a model where it does not know these are correlated. This is a problem with your model.
Yes. That’s basically the definition of CDT. That’s also why CDT is no good. You can quibble about the word but in “the literature”, ‘CDT’ means just that.
This only shows that the model is no good, because the model does not respect the assumptions of the decision theory.
Well, a practically important example is a deterministic agent which is copied, and then the copies play prisoner’s dilemma against each other.
There you have agents that use physics. Those, when evaluating hypothetical choices, use some model of physics, where an agent can model itself as a copyable deterministic process which it can’t directly simulate (i.e. it knows that the matter inside its head obeys known laws of physics). In the hypothetical that it cooperates, after processing the physics, it is found that the copy cooperates; in the hypothetical that it defects, it is found that the copy defects.
And then there are philosophers. The worse ones don’t know much about causality. They presumably have some sort of ill-specified oracle that we don’t know how to construct, which will tell them what is a ‘consequence’ and what is a ‘cause’, and they’ll only process the ‘consequences’ of the choice, treated as the ‘cause’. This weird oracle tells us that the other agent’s choice is not a ‘consequence’ of the decision, so it cannot be processed. It’s very silly and not worth spending brain cells on.
Playing prisoner’s dilemma against a copy of yourself is mostly the same problem as Newcomb’s. Instead of Omega’s prediction being perfectly correlated with your choice, you have an identical agent whose choice will be perfectly correlated with yours—or, possibly, randomly distributed in the same manner. If you can also assume that both copies know this with certainty, then you can do the exact same analysis as for Newcomb’s problem.
Whether you have a prediction made by an Omega or a decision made by a copy really does not matter, as long as they both are automatically going to be the same as your own choice, by assumption in the problem statement.
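A minimal illustration of that reduction, assuming standard prisoner’s dilemma payoffs (the numbers are mine):

```python
# Prisoner's dilemma against an exact copy. If both copies necessarily make
# the same move, only the diagonal outcomes are reachable -- the same
# collapse of the payoff matrix as in Newcomb's problem.
full_matrix = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
reachable = {moves: payoff for moves, payoff in full_matrix.items()
             if moves[0] == moves[1]}
print(max(reachable, key=reachable.get))   # ('C', 'C')
```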
The copy problem is well specified, though. Unlike the “predictor”. I clarified more in private. The worst part about Newcomb’s is that all the ex-religious folks seem to substitute something they formerly knew as ‘god’ for the predictor. The agent can also be further specified, e.g. as a finite Turing machine made of cogs and levers and tape with holes in it. The agent can’t simulate itself directly, of course, but it knows some properties of itself without simulation. E.g. it knows that in the alternative that it chooses to cooperate, its initial state was in set A (the states that result in cooperation); in the alternative that it chooses to defect, its initial state was in set B (the states that result in defection); and that no state is in both sets.
I’m with incogn on this one: either there is predictability or there is choice; one cannot have both.
Incogn is right in saying that, from Omega’s point of view, the agent is purely deterministic, i.e. more or less equivalent to a computer program. Incogn is slightly off the mark in conflating determinism with predictability: a system can be deterministic but still not predictable; this is the foundation of cryptography. Deterministic systems may or may not be predictable. (Unless Newcomb’s problem explicitly allows the agent to be non-deterministic, but this is unclear.)
The only way a deterministic system becomes unpredictable is if it incorporates a source of randomness that is stronger than the ability of a given intelligence to predict. There are good reasons to believe that there exist rather simple sources of entropy that are beyond the predictive power of any fixed super-intelligence—this is not just the foundation of cryptography, but is generically studied under the rubric of ‘chaotic dynamical systems’. I suppose you also have to believe that P is not NP. Or maybe I should just mutter ‘Turing halting problem’. (Unless Omega is taken to be a mythical comp-sci “oracle”, in which case you’ve pushed decision theory into that branch of set theory that deals with cardinal numbers larger than the continuum, and I’m pretty sure you are not ready for the dragons that lie there.)
If the agent incorporates such a source of non-determinism, then Omega is unable to predict, and the whole paradox falls down. Either Omega can predict, in which case EDT; else Omega cannot predict, in which case CDT. Duhhh. I’m sort of flabbergasted, because these points seem obvious to me… the Newcomb paradox, as given, seems poorly stated.
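A toy illustration of “deterministic but not predictable”, just to make the cryptographic point concrete (the seed and function are mine):

```python
import hashlib

# Deterministic: the same seed and round number always give the same choice.
# Unpredictable: without the seed, an outside predictor can do no better
# than guessing, even though nothing random is happening.
def choose(seed: bytes, round_number: int) -> str:
    digest = hashlib.sha256(seed + round_number.to_bytes(4, "big")).digest()
    return "one-box" if digest[0] % 2 == 0 else "two-box"

print(choose(b"my secret seed", 0))
```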
Think of real people making choices and you’ll see it’s the other way around. The carefully chosen paths are the predictable ones, if you know the variables involved in the choice. To be unpredictable, you need to think and choose less.
Hell, the archetypical imagery of someone giving up on choice is them flipping a coin or throwing a dart with closed eyes—in short resorting to unpredictability in order to NOT choose by themselves.
Either your claim is false or you are using a definition of at least one of those two words that means something different to the standard usage.
I do not think the standard usage is well defined, and avoiding these terms altogether is not possible, seeing as they are in the definition of the problem we are discussing.
Interpretations of the words and arguments for the claim are the whole content of the ancestor post. Maybe you should start there instead of quoting snippets out of context and linking unrelated fallacies? Perhaps, by specifically stating the better and more standard interpretations?
Huh? Can you explain? Normally, one states that a mechanical device is “predictable”: given its current state and some effort, one can discover its future state. Machines don’t have the ability to choose. Normally, “choice” is something that only a system possessing free will can have. Is that not the case? Is there some other “standard usage”? Sorry, I’m a newbie here; I honestly don’t know more about this subject, other than what I can deduce by my own wits.
Machines don’t have preferences, by which I mean they have no conscious self-awareness of a preferred state of the world—they can nonetheless execute “if, then, else” instructions.
That such instructions do not follow their preferences (as they lack such) can perhaps be considered sufficient reason to say that machines don’t have the ability to choose—that they’re deterministic doesn’t… “Determining something” and “Choosing something” are synonyms, not opposites after all.
Newcomb’s problem makes the stronger precondition that the agent is both predictable and that in fact one action has been predicted. In that specific situation, it would be hard to argue against that one action being determined and immutable, even if in general there is debate about the relationship between determinism and predictability.
Hmm, the FAQ, as currently worded, does not state this. It simply implies that the agent is human, that Omega has made 1000 correct predictions, and that Omega has billions of sensors and a computer the size of the moon. That’s large, but finite. One may assign some finite complexity to Omega—say 100 bits per atom times the number of atoms in the moon, whatever. I believe that one may devise pseudo-random number generators that can defy this kind of compute power. The relevant point here is that Omega, while powerful, is still not “God” (infinite, infallible, all-seeing), nor is it an “oracle” (in the computer-science sense of an “oracle”, viz. a machine that can decide undecidable computational problems).
I do not want to make estimates on how and with what accuracy Omega can predict. There is not nearly enough context available for this. Wikipedia’s version has no detail whatsoever on the nature of Omega. There seems to be enough discussion to be had, even with the perhaps impossible assumption that Omega can predict perfectly, always, and that this can be known by the subject with absolute certainty.
I think I agree, by and large, despite the length of this post.
Whether choice and predictability are mutually exclusive depends on what choice is supposed to mean. The word is not exactly well defined in this context. In some sense, “if variable > threshold then A, else B” is a choice.
I am not sure where you think I am conflating. As far as I can see, perfect prediction is obviously impossible unless the system in question is deterministic. On the other hand, determinism does not guarantee that perfect prediction is practical or feasible. The computational complexity might be arbitrarily large, even if you have complete knowledge of an algorithm and its input. I cannot really see the relevance to my above post.
Finally, I am myself confused as to why you want two different decision theories (CDT and EDT) instead of two different models for the two different problems conflated into the single identifier Newcomb’s paradox. If you assume a perfect predictor, and thus full correlation between prediction and choice, then you have to make sure your model actually reflects that.
Let’s start out with a simple matrix; P/C/1/2 are shorthands for prediction, choice, one-box, and two-box.
P1 C1: 1000
P1 C2: 1001
P2 C1: 0
P2 C2: 1
If the value of P is unknown, but independent of C: Dominance principle, C=2, entirely straightforward CDT.
If, however, the value of P is completely correlated with C, then the matrix above is misleading: P and C cannot be different and are really only a single variable, which should be wrapped in a single identifier. The matrix you are actually applying CDT to is the following one:
(P&C)1: 1000
(P&C)2: 1
The best choice is (P&C)=1, again by straightforward CDT.
The only “failure” of CDT is that it gives different, correct solutions to different problems, given a properly defined correlation of prediction and choice. The only advantage of EDT is that it is easier to sneak this information in without noticing it—even when it would be incorrect to do so. It is entirely possible to have a situation where prediction and choice are correlated, but the decision theory is not allowed to know this and must assume that they are uncorrelated. The decision theory should give the wrong answer in this case.
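A compact sketch of both readings of this matrix, using the same labels and payoffs:

```python
# The two readings of the same matrix. With P independent of C, dominance
# applies and CDT two-boxes; with P fully correlated with C, the off-diagonal
# rows are impossible, and the same reasoning one-boxes.
payoff = {("P1", "C1"): 1000, ("P1", "C2"): 1001,
          ("P2", "C1"): 0,    ("P2", "C2"): 1}

# Case 1: P unknown but independent of C -> compare C1 and C2 for each P.
for p in ("P1", "P2"):
    assert payoff[(p, "C2")] > payoff[(p, "C1")]   # C2 dominates, so C=2

# Case 2: P fully correlated with C -> only the diagonal (P&C) exists.
correlated = {c: payoff[(p, c)] for p, c in payoff if p[1] == c[1]}
print(max(correlated, key=correlated.get))          # C1
```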
Yes. I was confused, and perhaps added to the confusion.
If Omega cannot predict, TDT will two-box.