I am happy to contribute explanations of causal matters people are confused about.
I still haven’t found a readable meta-overview of causation. What I would love to read is a 3-10 page article that answers these questions: what is causation; why our intuitive feeling that “A causes B” is straightforward to understand is naive (with some examples); why “A causes B” is nevertheless fundamental and worth studying; which disciplines are interested in answering that question; what the main approaches are (short descriptions with simple, lucid examples); which of them are orthogonal to, in conflict with, or cooperative with each other; an example of how a rigorous definition of causality is useful in some other problem; and what the major challenges in the field are.
Before I’m able to digest such a summary (or ultimately construct it in my own head from other, longer sources if I can’t find one), I remain confused by just about every theoretical discussion of causation. Without at least a vague understanding of what’s known, what’s unknown, what’s important, and what’s mainstream, everything sounds a little sectarian.
1) Do you understand the standard story about the thermodynamic arrow of time? Wikipedia:
Physically speaking, the perception of cause and effect in the dropped cup example is a phenomenon of the thermodynamic arrow of time, a consequence of the Second law of thermodynamics. Controlling the future, or causing something to happen, creates correlations between the doer and the effect, and these can only be created as we move forwards in time, not backwards.
2) Do you understand the standard story about the smoking/tar/cancer example in Pearl’s theory of causality? If not, here’s a good explanation.
For anything more advanced than that, Ilya is probably your best bet :-)
1) yes 2) no, and I’ll read through Nielsen’s post, thanks. I’ve been postponing the task of actually reading Pearl’s book.
I find the Socratic approach useful for bridging gaps, do you?
I sense there may be a contradiction between a decision theory that aims to be timeless and the mandate to ignore sunk costs because they’re in the past. But I fear I may be terribly misunderstanding both concepts.
Yes, that might be a genuine contradiction, and ignoring sunk costs might be wrong. Can you try to come up with a simple decision problem that puts the two into conflict?
I don’t see this contradiction. In a timeless decision theory, the diagram and parameters are not the same when X is in control of resource A (at “time” T) and when X is not in control of resource A (at time T+1).
The “timeless” in “timeless decision theory” doesn’t mean that the decision theory ignores the effects of time and past decisions. Rather, it refers to a more technical (and definitely more confusing) abstraction about predictions, and it subtly alludes to the (also technical) concept of symmetry in physics.
Mainly, the point is to deflect naive reasoning in problems involving predictions or similar “time-defying” situations. The classic example is newcomblike problems, specifically Newcomb’s Problem. In these situations, acting as if your current decision were a partial cause of the past prediction, and thus of whether or not Omega/The Predictor put a reward in a box, leads to better subjective chances of finding a reward in said box. The “timeless” aspect here is that one phenomenon (the decision you make) almost looks like a cause of another (the prediction of your decision) that happened “in the past”.
In fact, however, they have a common prior cause: the state of the universe, and in particular of the brain / processor / information of the entity making the decision, prior to the prediction. Treating it as, and calling it, “timeless” helps keep this from turning into a debate about free will and determinism.
In newcomblike problems, an event B happens in which Omega predicts whether A1 or A2 will happen, based on whether C1 or C2 is true (two possible states of the player’s brain, or outcomes of a simulation). Then either A1 or A2 happens, based on whether C1 or C2 is true, as predicted by Omega. Since the player doesn’t have Omega’s means of knowing C or B, he must decide as if A caused C, which caused B, which could be roughly described as a decision causing the result of a past prediction of that decision.
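To make that A/B/C structure concrete, here is a toy Python sketch with made-up numbers (the payoffs BOX_A and BOX_B and the prediction ACCURACY are illustrative assumptions, not anything canonical). The disposition C drives both Omega’s earlier prediction B and the actual choice A, so the one-boxing disposition comes out ahead without any backwards causation:

```python
# Toy model of the common-cause structure: C (the agent's disposition) causes
# both B (Omega's earlier prediction) and A (the actual choice).
# All numbers are made up for illustration.
BOX_A = 1_000        # transparent box, always filled
BOX_B = 1_000_000    # opaque box, filled only if Omega predicted one-boxing
ACCURACY = 0.99      # assumed accuracy of Omega's prediction of the disposition C

def expected_payoff(one_box_disposition: bool) -> float:
    """Expected reward given disposition C, which drives both B and A."""
    # B: chance Omega predicted one-boxing, given the disposition it read off C.
    p_predicted_one_box = ACCURACY if one_box_disposition else 1 - ACCURACY
    # A: the actual choice follows the same disposition C.
    if one_box_disposition:
        return p_predicted_one_box * BOX_B              # take only the opaque box
    return p_predicted_one_box * BOX_B + BOX_A          # take both boxes

print("one-box disposition:", expected_payoff(True))   # ~990,000
print("two-box disposition:", expected_payoff(False))  # ~11,000
```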
So, back to the timeless vs. sunk costs “contradiction”: in a sunk-costs situation, there is no Omega, there is no C, there is no prediction (B). At the moment of decision, the state of the game in the abstract is something more like: “Decision A caused Resource B to go from 5 to 3; 1 B can be paid to obtain 2 utilons by making decision C1; 2 B can be paid to obtain 5 utilons by making decision C2.” There are no predictions or fancy delusions of affecting the events that caused the current state. A caused B(5->3), which caused (NOW), which causes C. C has no causal effect on (NOW), which has no causal effect on B, which has no causal effect on A. No amount of removing the timestamps and pretending that your future decision will change how it was predicted is going to change the (NOW) state.
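Here is a minimal sketch of that sunk-cost situation in the same toy Python style, using the numbers above and assuming (purely for illustration) that C1 and C2 are mutually exclusive and that leftover B has no value of its own. The 2 units of B already spent by decision A never enter the comparison:

```python
# Minimal sketch of the sunk-cost situation: resource B went from 5 to 3 because
# of decision A, but only the current stock and the available options matter.
current_b = 3  # what is actually available at (NOW); the 2 already spent are sunk

options = {
    "C1": {"cost": 1, "utilons": 2},
    "C2": {"cost": 2, "utilons": 5},
}

def best_option(b_available: int) -> str:
    """Pick the affordable option with the highest utilon payoff."""
    affordable = {name: o for name, o in options.items() if o["cost"] <= b_available}
    return max(affordable, key=lambda name: affordable[name]["utilons"])

print(best_option(current_b))  # -> C2, regardless of how much B was spent before (NOW)
```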
I could go on at length and in depth, but let’s first see how much of this makes sense (i.e., how much you understand and/or where I mis-explained).