Underlying physics is symmetric in time. If you assume that the state of the world is such that one box is picked up by your arm, that imposes constraints on both the future and the past light cone. If you do not process the constraints on the past light cone then your simulator state does not adhere to the laws of physics, namely, the decision arises out of thin air by magic.
If you do process the constraints fully, then the action to take one box requires a pre-copy state of “you” that leads to the decision to pick one box, which requires money in one box; the action to take two boxes likewise, after processing constraints, requires no money in the first box. (“you” is a black box which is assumed to be non-magical, copyable, and deterministic, for the purposes of the exercise.)
edit: came up with an example. Suppose ‘you’ is a robotics controller: you know you’re made of various electrical components, and you’re connected to a battery and some motors. You evaluate a counterfactual where you put a current onto a wire for some time. Constraint imposed on the past: the battery has been charged within the last 10 hours, because otherwise it couldn’t supply enough current. If the constraints contradict known reality, then you know you can’t take this action. Suppose there’s a replacement battery pack 10 meters away from the robot, and the robot is unsure whether the packs were swapped 5 hours ago; in the alternative where they weren’t, it would not have enough charge to get to the extra pack, and in the alternative where they were swapped, it doesn’t need to get to the spent extra pack. Evaluating the hypothetical where it got to the extra pack, it knows the packs were swapped in the past and the extra pack is spent. (Of course for simplicity one can posit all sorts of things, such as electrical currents coming out of nowhere, but outside the context of philosophical speculation the cause of the error is very clear.)
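The robot's reasoning can be sketched as a filter over candidate pasts: hypothesize the action, keep only the pasts physically consistent with it having happened, and read off the consequences. This is a toy illustration only; all names and numbers (charge fractions, the walking cost) are invented for the sketch.

```python
# Two candidate pasts the robot is uncertain between. Charges are fractions
# of a full pack; the numbers are made up for illustration.
PASTS = [
    {"name": "swapped",     "current_pack_charge": 0.8, "extra_pack_charge": 0.0},
    {"name": "not_swapped", "current_pack_charge": 0.1, "extra_pack_charge": 1.0},
]

WALK_COST = 0.5  # assumed charge needed to reach the extra pack

def evaluate_reach_extra_pack():
    """Hypothesize 'robot reached the extra pack' and propagate the
    constraint backwards: only pasts with enough charge to walk survive."""
    consistent = [p for p in PASTS if p["current_pack_charge"] >= WALK_COST]
    # In each surviving past, what does the robot find on arrival?
    return [(p["name"], p["extra_pack_charge"]) for p in consistent]

print(evaluate_reach_extra_pack())  # [('swapped', 0.0)]
```

Only the "swapped" past survives the backward constraint, so in the hypothetical where the robot reaches the extra pack, that pack is necessarily spent, exactly as described above.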
We do, by and large, agree. I just thought, and still think, the terminology is somewhat misleading. This is probably not a point I should press, because I have no mandate to dictate how words should be used, and I think we understand each other, but maybe it is worth a shot.
I fully agree that some values in the past and future can be correlated. This is more or less the basis of my analysis of Newcomb’s problem, and I think it is also what you mean by imposing constraints on the past light cone. I just prefer to use different words for backwards correlation and forwards causation.
I would say that the robot getting the extra pack necessitates that it had already been charged and did not need the extra pack, while not having been charged earlier would cause it to fail to recharge itself. I think there is a significant difference between how not being charged causes the robot to run out of power, versus how running out of power necessitates that it has not been charged.
You may of course argue that the future and the past are the same from the viewpoint of physics, and that either can be said to cause the other. However, as long as people consider the future and the past to be conceptually completely different, I do not see the hurry to erode these differences in the language we use. It probably would not be a good idea to make “tomorrow” refer to both the day before and the day after today, either.
I guess I will repeat: This is probably not a point I should press, because I have no mandate to dictate how words should be used.
I’d be the first to agree on terminology here. I’m not suggesting that the choice of the box causes the money in the box, simply that those two are causally connected, in the physical sense. The whole issue seems to stem from taking the word ‘causal’ from causal decision theory and treating it as more than a mere name, bringing in enormous amounts of confused philosophy which doesn’t capture very well how physics works.
When deciding, you evaluate hypotheticals of you making different decisions. A hypothetical is like a snapshot of the world state. The laws of physics very often have to be run backwards from the known state to deduce the past state, and then forwards again to deduce the future state. E.g. a military robot sees a hand grenade flying into its field of view; it calculates the motion backwards to find where it was thrown from, locating the grenade thrower, then uses a model of the thrower to predict another grenade in the future.
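The backwards-then-forwards step in the grenade example can be sketched with 1-D constant-velocity motion (all values invented for illustration):

```python
def infer_thrower_position(observed_pos, velocity, time_in_flight):
    """Run the dynamics backwards from the observed state to a past state."""
    return observed_pos - velocity * time_in_flight

def predict_next_grenade(thrower_pos, velocity, time_ahead):
    """Run the model forwards from the deduced past to predict the future."""
    return thrower_pos + velocity * time_ahead

# Grenade observed at 30 m, moving at 10 m/s, airborne for 2 s (assumed).
origin = infer_thrower_position(observed_pos=30.0, velocity=10.0, time_in_flight=2.0)
print(origin)                                   # 10.0: deduced launch position
print(predict_next_grenade(origin, 10.0, 2.0))  # 30.0: predicted next impact
```

The same backwards-forwards pattern is what "processing constraints on the past light cone" amounts to in the decision-theoretic discussion above.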
So, you process the hypothetical where you picked up one box, to find how much money you get. You have the known state: you picked one box. You deduce that the past state of deterministic you must have been Q, which results in picking up one box; a copy of that state has been made, and that state resulted in a prediction of one box. You conclude that you get $1,000,000. You do the same for picking two boxes: the previous state must be R, etc., and you conclude you get $1000. You compare, and you pick the universe where you take one box.
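A toy sketch of this evaluation, with invented structure: since "you" is deterministic and the predictor ran a copy of the same state, propagating the constraint backwards forces the prediction to equal the hypothesized choice.

```python
def payoff(choice):
    """Payoff of the hypothetical in which the deterministic agent made
    `choice` ('one' or 'two'), with the backward constraint processed."""
    # Backward constraint: the predictor ran a copy of the same deterministic
    # state, so its prediction must equal the hypothesized choice.
    predicted = choice
    box_a = 1_000_000 if predicted == "one" else 0  # opaque box
    box_b = 1_000                                   # transparent box
    return box_a if choice == "one" else box_a + box_b

hypotheticals = {c: payoff(c) for c in ("one", "two")}
print(hypotheticals)  # {'one': 1000000, 'two': 1000}
print(max(hypotheticals, key=hypotheticals.get))  # one
```

Dropping the `predicted = choice` line (i.e. not processing the backward constraint) is exactly the "decision arises out of thin air" error described earlier, and yields the two-boxing answer.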
(And with regard to the “smoking lesion” problem: the smoking lesion postulates a blatant logical contradiction. It postulates that the lesion affects the choice, which contradicts the premise that the choice is made by the agent we are speaking of. As a counterexample to a decision theory, it is laughably stupid.)
I think laughably stupid is a bit too harsh. As I understand things, confusion regarding Newcomb’s problem leads to new decision theories, which in turn make the smoking lesion problem interesting, because the new decision theories introduce new, critical weaknesses in order to solve Newcomb’s problem. I do agree, however, that the smoking lesion problem is trivial if you stick to a sensible CDT model.
The problems with EDT are quite ordinary: it’s looking for good news, and it is also somewhat under-specified (e.g. some argue it would two-box in Newcomb’s after learning physics). A decision theory cannot be disqualified for giving the ‘wrong’ answer in the hypothetical that 2*2=5, or in the hypothetical that (a or not a) = false, or in the hypothetical that the decision is simultaneously controlled by the decision theory and set, without involvement of the decision theory, by the lesion (and by a random process if the correlation is imperfect).
Excellent.