Causality is underspecified, whereas the laws of physics are fairly well defined, especially for a hypothetical where you can e.g. assume deterministic Newtonian mechanics for the sake of simplifying the analysis. You have the hypothetical: a sequence of commands to the robotic manipulator. You process the laws of physics to conclude that this sequence of commands picks up one box of unknown weight. You need to determine the weight of the box to see whether this sequence of commands will lead to the robot tipping over. Now, you see, to determine that sort of thing, models of the physical world tend to walk backwards and forwards in time: for example, if your window shatters and a rock flies in, you can conclude that there's a rock thrower in the direction the rock came from, and you do it by walking backwards in time.
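(To make the robot example concrete: here's a minimal sketch of the forward half of that computation, with made-up masses and distances, just a static torque balance about the front edge of the base, evaluated over a range of assumed box weights since the weight is the unknown.)

```python
# Minimal sketch (all masses and distances invented): does a planned lift tip
# the robot over? Compare the load's tipping torque about the front edge of
# the base against the restoring torque from the robot's own weight.

G = 9.81  # m/s^2

def tips_over(box_mass_kg,
              robot_mass_kg=80.0,      # hypothetical robot mass
              base_half_depth_m=0.25,  # robot's center of mass to front edge
              reach_m=0.60):           # front edge to the carried box
    """True if the load's tipping torque exceeds the robot's restoring torque."""
    tipping = box_mass_kg * G * reach_m
    restoring = robot_mass_kg * G * base_half_depth_m
    return tipping > restoring

# The box weight is unknown, so evaluate the hypothetical over a range:
for m in (5, 10, 20, 40):
    print(f"{m:>2} kg box -> tips over: {tips_over(m)}")
```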
So it's basically EDT, where you just conditionalize on the action being performed?

In a way, albeit it does not resemble how EDT tends to be presented.
On the CDT, formally speaking, what do you think P(A if B) even is? Keep in mind that, given some deterministic, computable laws of physics, and given that you ultimately decide an option B, in the hypothetical that you decide an option C where C != B, it will be provable that C = B, i.e. you have a contradiction inside the hypothetical.
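(A toy version of that contradiction, with invented details: "physics" is a deterministic program, the decision is one of the things it computes, and the hypothetical "the decision is C" asserts something the same laws refute.)

```python
# Toy construction, not anyone's canonical formalization: the agent's decision
# is just another consequence of the initial state under deterministic laws,
# so assuming it is C while the unchanged laws derive B gives two incompatible
# values for the same quantity inside the hypothetical.

def physics(initial_state):
    # deterministic, computable stand-in for the laws of physics
    return "B" if initial_state % 2 == 0 else "C"

initial_state = 4
B = physics(initial_state)   # what is actually decided: "B"
C = "C"                      # the hypothetical decision, C != B

assumed = C                        # premise: "the decision is C"
derived = physics(initial_state)   # what the unchanged laws still prove: "B"

print("hypothetical is consistent:", assumed == derived)   # False
```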
In a way, albeit it does not resemble how EDT tends to be presented.
So then how does it not fall prey to the problems of EDT? It depends on the precise formalization of “computing what the world will be like if the action is taken, according to the laws of physics”, of course, but I’m having trouble imagining how that would not end up basically equivalent to EDT.
On the CDT, formally speaking, what do you think P(A if B) even is?
That is not the problem at all, it’s perfectly well-defined. I think if anything, the question would be what CDT’s P(A if B) is intuitively.
So then how does it not fall prey to the problems of EDT?
What are those, exactly? The "smoking lesion"? It specifies that the output of the decision theory correlates with the lesion. Who knows how, but for it to actually correlate with the decision of that decision theory other than via the inputs to the decision theory, it has got to be our good old friend Omega doing some intelligent design, adding or removing that lesion. (And if the correlation goes through the inputs, then it'll smoke.)
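(A worked sketch of the "through the inputs" case, with invented numbers: if the lesion reaches the decision only via a craving the agent can observe, then conditioning on that craving screens the lesion off from the action, and the expected-utility comparison comes down to smoking being enjoyable.)

```python
# All probabilities and utilities here are made up for illustration.
# Lesion causes cancer and also causes a craving; the craving is an *input*
# the decision procedure sees. The action is a deterministic function of that
# input, so it carries no further news about the lesion beyond the craving.

import random
random.seed(0)

samples = []
for _ in range(100_000):
    lesion = random.random() < 0.5
    craving = lesion if random.random() < 0.9 else not lesion
    cancer = lesion and (random.random() < 0.8)
    samples.append((craving, cancer))

def p_cancer_given_craving(craving):
    hits = [cancer for c, cancer in samples if c == craving]
    return sum(hits) / len(hits)

for craving in (True, False):
    p = p_cancer_given_craving(craving)
    # Utilities (invented): smoking +1, cancer -100. The cancer term is the
    # same for both actions once the craving is known, so smoking wins by +1.
    eu_smoke, eu_abstain = 1 - 100 * p, 0 - 100 * p
    print(f"craving={craving}: EU(smoke)={eu_smoke:.1f}  EU(abstain)={eu_abstain:.1f}")
```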
That is not the problem at all, it’s perfectly well-defined.
Given a world state A which evolves into a world state B (computable, deterministic universe), the hypothetical "what if world state A evolved into C, where C != B" will lead, among other absurdities, to a proof that B = C, contradicting that B != C (of course you can ensure that this particular proof won't be reached with various silly hacks, but you're still making false assumptions and arriving at false conclusions). Maybe what you call 'causal' decision theory should be called 'acausal', because it in fact ignores the causes of the decision, and goes as far as breaking down its world model to do so.

If you don't make contradictory assumptions, then you have a world state A that evolves into a world state B, and a world state A' that evolves into a world state C, and in the hypothetical that the state becomes C != B, the prior state has got to be A' != A. Yeah, this looks weird to Westerners, with their philosophy of free will and of decisions having the potential to send the same world down a different path. I am guessing it is much, much less problematic if you were more culturally exposed to determinism/fatalism. This may be a very interesting topic within comparative anthropology.
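(A tiny toy illustration of that last point, with made-up dynamics: under a deterministic update rule, a hypothetical in which the outcome differs is a hypothetical about a different prior state, not about the same state going somewhere else.)

```python
# Toy deterministic "laws of physics" on a small state space (invented).

def step(state):
    return (state * 3 + 1) % 17

A = 5
B = step(A)      # the state A actually evolves into

C = 11           # a desired different outcome, C != B
assert C != B

# There is no consistent "A evolves into C"; instead we look for a different
# prior state A' with step(A') == C:
candidates = [s for s in range(17) if step(s) == C]
print(f"A={A} evolves into B={B}; prior states evolving into C={C}: {candidates}")
```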
The main distinction between philosophy and mathematics (or philosophy done by mathematicians) seems to be that in the latter, if you get yourself a set of assumptions leading to contradictory conclusions (example: in Newcomb's, on one hand it can be concluded that agents that one-box walk out with more money; on the other hand, agents that choose to two-box get strictly more money than those that one-box), it is generally concluded that something is wrong with the assumptions, rather than argued over which of the conclusions is truly correct given the assumptions.
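(The arithmetic behind that example, with the standard $1,000,000 / $1,000 payoffs and a perfect predictor assumed for simplicity:)

```python
# Standard Newcomb payoffs; a perfect predictor is assumed only to keep the
# arithmetic short.

def payoff(action, predicted_action):
    opaque = 1_000_000 if predicted_action == "one-box" else 0
    transparent = 1_000
    return opaque + (transparent if action == "two-box" else 0)

# Conclusion 1: with an accurate predictor, agents that one-box walk out richer.
print(payoff("one-box", predicted_action="one-box"),   # 1,000,000
      payoff("two-box", predicted_action="two-box"))   # 1,000

# Conclusion 2: holding the prediction (box contents) fixed, two-boxing is
# always exactly $1,000 better -- the dominance argument.
for predicted in ("one-box", "two-box"):
    assert payoff("two-box", predicted) == payoff("one-box", predicted) + 1_000
```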