Could you try to maybe give a straight answer to, what is your problem with my model above? It accurately models the situation. It allows CDT to give a correct answer. It does not superficially resemble the word for word statement of Newcomb’s problem.
Therefore, even if the CDT algorithm knows that its choice is predetermined, it cannot make use of that in its decision, because it cannot update contrary to the direction of causality.
You are trying to use a decision theory to determine which choice an agent should make, after the agent has already had its algorithm fixed, which causally determines which choice the agent must make. Do you honestly blame that on CDT?
Could you try to maybe give a straight answer to, what is your problem with my model above? It accurately models the situation. It allows CDT to give a correct answer.
No, it does not, that’s what I was trying to explain. It’s what I’ve been trying to explain to you all along: CDT cannot make use of the correlation between C and P. CDT cannot reason backwards in time. You do know how surgery works, don’t you? In order for CDT to use the correlation, you need a causal arrow from C to P—that amounts to backward causation, which we don’t want. Simple as that.
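A minimal sketch of what surgery does here (all numbers and names are illustrative, not from the thread): a hidden disposition D causes both the prediction P and the choice C. Observing C is evidence about P, but intervening on C (the do-operator) cuts the arrow from D into C, so the intervention tells CDT nothing about P:

```python
# Tiny three-node network: D (disposition) -> P (prediction), D -> C (choice).
# Surgery deletes incoming arrows to C, so intervening on C carries no
# information about P. All probabilities are illustrative.

P_D = {"one_boxer": 0.5, "two_boxer": 0.5}             # prior over dispositions
P_pred_given_D = {"one_boxer": 1.0, "two_boxer": 0.0}  # P(predicts one-box | D)
P_C_given_D = {"one_boxer": 1.0, "two_boxer": 0.0}     # P(agent one-boxes | D)

def p_pred_one_box_observed(c_one_box: bool) -> float:
    """P(prediction = one-box | C observed): conditioning flows back through D."""
    num = den = 0.0
    for d, pd in P_D.items():
        pc = P_C_given_D[d] if c_one_box else 1 - P_C_given_D[d]
        num += pd * pc * P_pred_given_D[d]
        den += pd * pc
    return num / den

def p_pred_one_box_do(c_one_box: bool) -> float:
    """P(prediction = one-box | do(C)): the arrow D -> C is cut, so the
    intervened value of C is irrelevant and the prior over D is unchanged."""
    return sum(pd * P_pred_given_D[d] for d, pd in P_D.items())

print(p_pred_one_box_observed(True))   # 1.0: observing C is evidence about P
print(p_pred_one_box_do(True))         # 0.5: intervening on C is not
```

The gap between the two numbers is exactly the correlation that CDT, by construction, refuses to use.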
You are trying to use a decision theory to determine which choice an agent should make, after the agent has already had its algorithm fixed, which causally determines which choice the agent must make.
I’m not sure what the meaning of this is. Of course the decision algorithm is fixed before it’s run, and therefore its output is predetermined. It just doesn’t know its own output before it has computed it. And I’m not trying to figure out what the agent should do—the agent is trying to figure that out. Our job is to figure out which algorithm the agent should be using.
PS: The downvote on your post above wasn’t from me.
You are applying a decision theory to the node C, which means you are implicitly stating: there are multiple possible choices to be made at this point, and this decision can be made independent of nodes not in front of this one. This means that your model does not model the Newcomb’s problem we have been discussing—it models another problem, where C can have values independent of P, which is indeed solved by two-boxing.
It is not the decision theory’s responsibility to know that the value of node C is somehow supposed to retroactively alter the state of the branch the decision theory is working in. This is, however, a consequence of the modelling you do. You are deliberately applying CDT too late in your network, such that P, and thus the cost of being a two-boxer, has gone over the horizon, and such that the node C must affect P backwards: not because the problem actually contains backwards causality, but because you want to fix the values of the nodes in the wrong order.
If you do not want to make the assumption of free choice at C, then you can simply not promote it to an action node. If the decision at C is causally determined from A, then you can apply a decision theory at node A and follow the causal inference. Then you will, once again, get a correct answer from CDT, this time for the version of Newcomb’s problem where A and C are fully correlated.
If you refuse to reevaluate your model, then we might as well leave it at this. I do agree that if you insist on applying CDT at C in your model, then it will two-box. I do not agree that this is a problem.
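The point about where the action node sits can be sketched concretely (the payoffs are the standard $1,000,000/$1,000 ones; the encoding is illustrative). With the action node at A, upstream of the prediction, the intervention propagates forward into P and CDT one-boxes; with the action node at C, downstream of a fixed P, two-boxing dominates:

```python
# Standard Newcomb payoffs: the opaque box holds $1M iff one-boxing was
# predicted; two-boxing always adds the transparent $1K.
def payoff(choice: str, prediction: str) -> int:
    million = 1_000_000 if prediction == "one_box" else 0
    thousand = 1_000 if choice == "two_box" else 0
    return million + thousand

def cdt_at_A() -> str:
    # Action node at A: the prediction is causally downstream of the action,
    # so intervening on A propagates into P (A and C fully correlated).
    return max(["one_box", "two_box"], key=lambda a: payoff(a, prediction=a))

def cdt_at_C(p_pred_one_box: float = 0.5) -> str:
    # Action node at C: P is already fixed and causally upstream, so the
    # intervention cannot change it; two-boxing dominates for any belief.
    def ev(c: str) -> float:
        return (p_pred_one_box * payoff(c, "one_box")
                + (1 - p_pred_one_box) * payoff(c, "two_box"))
    return max(["one_box", "two_box"], key=ev)

print(cdt_at_A())  # one_box
print(cdt_at_C())  # two_box
```

Which of these two models is the Newcomb’s problem is, of course, exactly what is in dispute here.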
You don’t promote C to the action node, it is the action node. That’s the way the decision problem is specified: do you one-box or two-box? If you don’t accept that, then you’re talking about a different decision problem. But in Newcomb’s problem, the algorithm is trying to decide that. It’s not trying to decide which algorithm it should be (or should have been). Having the algorithm pretend—as a means of reaching a decision about C—that it’s deciding which algorithm to be is somewhat reminiscent of the idea behind TDT and has nothing to do with CDT as traditionally conceived of, despite the use of causal reasoning.
In AI, you do not discuss it in terms of the anthropomorphic “trying to decide”. For example, there is the “model-based, utility-based agent”. Computing what the world will be like if a decision is made in a specific way is part of the model of the world, i.e. part of the laws of physics as the agent knows them. If this physics implements the predictor at all, a model-based, utility-based agent will one-box.
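A sketch of that agent design, under the strong assumption that the agent’s world model implements a perfect predictor (here trivially, by letting the modelled prediction track the candidate action; all names and numbers are illustrative):

```python
# Model-based, utility-based agent: for each candidate action, run the world
# model forward, then score the resulting world. Because the (assumed perfect)
# predictor is part of the model, the modelled box contents depend on the
# candidate action itself.

def world_model(action: str) -> dict:
    prediction = action  # the model's "physics" includes a perfect predictor
    return {
        "box_a": 1_000,                                     # transparent box
        "box_b": 1_000_000 if prediction == "one_box" else 0,
    }

def utility(action: str, world: dict) -> int:
    return world["box_b"] + (world["box_a"] if action == "two_box" else 0)

def choose(actions=("one_box", "two_box")) -> str:
    return max(actions, key=lambda a: utility(a, world_model(a)))

print(choose())  # one_box
```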
I don’t see at all what’s wrong or confusing about saying that an agent is trying to decide something; or even, for that matter, that an algorithm is trying to decide something, even though that’s not a precise way of speaking.
More to the point, though, doesn’t what you describe fit EDT and CDT both, with each theory having a different way of computing “what the world will be like if the decision is made in a specific way”?
Decision theories do not compute what the world will be like. Decision theories select the best choice, given a model with this information included. How the world works is not something a decision theory figures out; it is not a physicist, and it has no means to perform experiments outside of its current model. You need to take care of that yourself, and build it into your model.
If a decision theory had the weakness that certain possible scenarios could not be modeled, that would be a problem. Any decision theory will have the feature that it works with the model it is given, not with the model it should have been given.
Causality is underspecified, whereas the laws of physics are fairly well defined, especially in a hypothetical where you can, e.g., assume deterministic Newtonian mechanics for the sake of simplifying the analysis. You have the hypothetical: a sequence of commands to the robotic manipulator. You process the laws of physics to conclude that this sequence of commands picks up one box of unknown weight. You need to determine the weight of the box to see whether this sequence of commands will lead to the robot tipping over. Now, you see, to determine that sort of thing, models of the physical world tend to walk backwards as well as forwards in time: for example, if your window shatters and a rock flies in, you can conclude that there is a rock-thrower in the direction the rock came from, and you do it by walking backwards in time.
So it’s basically EDT, where you just conditionalize on the action being performed?

In a way, albeit it does not resemble how EDT tends to be presented.
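For concreteness, conditionalizing on the action in the Newcomb setup looks something like this (the 0.99 predictor accuracy and the encoding are illustrative assumptions):

```python
# EDT-style evaluation: condition on "my action is a", update beliefs about
# the prediction through the assumed correlation, then take expected payoff.

def edt_choice(accuracy: float = 0.99) -> str:
    def expected_payoff(action: str) -> float:
        # P(prediction matches my action | I take this action) = accuracy
        million = 1_000_000
        thousand = 1_000 if action == "two_box" else 0
        if action == "one_box":
            return accuracy * million + thousand
        return (1 - accuracy) * million + thousand

    return max(["one_box", "two_box"], key=expected_payoff)

print(edt_choice())  # one_box
```

With a sufficiently accurate predictor this one-boxes; drop the assumed correlation and it two-boxes, which is where the whole disagreement about which correlations a decision theory may use comes in.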
On the CDT, formally speaking, what do you think P(A if B) even is? Keep in mind that, given some deterministic, computable laws of physics, and given that you ultimately decide an option B, then in the hypothetical that you decide an option C where C != B, it will be provable that C = B, i.e. you have a contradiction in the hypothetical.
In a way, albeit it does not resemble how EDT tends to be presented.
So then how does it not fall prey to the problems of EDT? It depends on the precise formalization of “computing what the world will be like if the action is taken, according to the laws of physics”, of course, but I’m having trouble imagining how that would not end up basically equivalent to EDT.
On the CDT, formally speaking, what do you think P(A if B) even is?
That is not the problem at all, it’s perfectly well-defined. I think if anything, the question would be what CDT’s P(A if B) is intuitively.
So then how does it not fall prey to the problems of EDT?
What are those, exactly? The “smoking lesion”? It specifies that the output of the decision theory correlates with the lesion. Who knows how, but for the lesion to actually correlate with the decision of that decision theory other than via the inputs to the decision theory, it has to be our good old friend Omega doing some intelligent design, adding or removing that lesion. (And if the correlation goes through the inputs, then it’ll smoke.)
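The “through the inputs” case can be sketched as a screening-off computation (the craving variable and all probabilities are illustrative assumptions): once the input that mediates the correlation is observed, the action itself is no further evidence about the lesion, so the agent smokes.

```python
# If the lesion influences the decision only through an observed input (a
# craving, say), then conditional on that input, smoking carries no extra
# evidence about the lesion: P(lesion) is fixed before the action is chosen.

P_LESION = 0.1
P_CRAVING_GIVEN_LESION = 0.9
P_CRAVING_GIVEN_NO_LESION = 0.1

def p_lesion_given_craving(craving: bool) -> float:
    """Bayes update on the observed craving; the action never enters."""
    num = P_LESION * (P_CRAVING_GIVEN_LESION if craving
                      else 1 - P_CRAVING_GIVEN_LESION)
    alt = (1 - P_LESION) * (P_CRAVING_GIVEN_NO_LESION if craving
                            else 1 - P_CRAVING_GIVEN_NO_LESION)
    return num / (num + alt)

def decide(craving: bool) -> str:
    # The lesion probability is the same whether or not the agent smokes,
    # so the (assumed positive) enjoyment of smoking decides the matter.
    utility_smoke, utility_abstain = 1.0, 0.0
    return "smoke" if utility_smoke > utility_abstain else "abstain"

print(decide(craving=True))   # smoke
```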
That is not the problem at all, it’s perfectly well-defined.
Given a world state A which evolves into world state B (computable, deterministic universe), the hypothetical “what if world state A evolved into C, where C != B” will lead, among other absurdities, to a proof that B = C, contradicting that B != C (of course you can ensure that this particular proof won’t be reached with various silly hacks, but you’re still making false assumptions and arriving at false conclusions). Maybe what you call ‘causal’ decision theory should be called ‘acausal’, because it in fact ignores the causes of the decision, and goes as far as breaking down its world model to do so. If you don’t make contradictory assumptions, then you have a world state A that evolves into world state B, and a world state A’ that evolves into world state C, and in the hypothetical that the state becomes C != B, the prior state has to be A’ != A. Yes, it looks weird to Westerners, with their philosophy of free will and of decisions having the potential to send the same world down a different path. I am guessing it is much less problematic if you were more culturally exposed to determinism/fatalism. This may be a very interesting topic within comparative anthropology.
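The determinism point can be made with a toy map standing in for the laws of physics (the map and the states are purely illustrative): a counterfactual outcome requires a counterfactual prior state, not the same prior state evolving differently.

```python
# Toy deterministic "physics": each state has exactly one successor.
def step(state: int) -> int:
    return state * 2 + 1

A = 3
B = step(A)        # the actual outcome: step(A) is just B, nothing else
assert step(A) == B

# "What if A evolved into C != B?" is contradictory under this physics.
# The consistent hypothetical changes the prior state instead:
A_prime = 5
C = step(A_prime)  # a different outcome, from a different prior state A'
assert C != B and A_prime != A
```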
The main distinction between philosophy and mathematics (or philosophy done by mathematicians) seems to be that in the latter, if you get yourself a set of assumptions leading to contradictory conclusions (example: in Newcomb’s problem, on one hand it can be concluded that agents which one-box walk out with more money; on the other hand, agents that choose to two-box get strictly more money than those that one-box), it is generally concluded that something is wrong with the assumptions, rather than argued which of the conclusions is truly correct given the assumptions.
The values of A, C and P are all equivalent. You insist on making CDT determine C in a model where it does not know these are correlated. This is a problem with your model.
You are applying a decision theory to the node C, which means you are implicitly stating: there are multiple possible choices to be made at this point, and this decision can be made independent of nodes not in front of this one.
Yes. That’s basically the definition of CDT. That’s also why CDT is no good. You can quibble about the word but in “the literature”, ‘CDT’ means just that.
This only shows that the model is no good, because the model does not respect the assumptions of the decision theory.