Although I don’t believe it impossible for a gene to cause you to think in specific ways, the setting of the game does not require such a mechanism.
It is required. If Omega is making true statements, they are (leaving aside those cases where someone is made aware of the prediction before choosing) true independently of Omega making them. That means that everyone with gene A makes choice A and everyone with gene B makes choice B. This strong entanglement implies the existence of some sort of causal connection, whether or not Omega exists.
More generally, I think that every one of those problems would be made clear by exhibiting the causal relationships that are being presumed to hold. Here is my attempt.
For the School Mark problem, the causal diagram I obtain from the description is one of these:
pupil's character ----> teacher's prediction ----> final mark
        |
        |
        V
    studying ----> exam performance
or
pupil's character ----> teacher's prediction
        |
        |
        V
    studying ----> exam performance ----> final mark
For the first of these, the teacher has waived the requirement of actually sitting the exam, and the student needn’t bother. In the second, the pupil will not get the marks except by studying for and taking the exam. See also the decision problem I describe at the end of this comment.
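To make the difference concrete, here is a minimal sketch of the two diagrams (the function and its names are mine and purely illustrative):

```python
# Sketch of the two School Mark diagrams. In the first, the final mark is a
# function of the teacher's prediction alone; in the second, it is a function
# of exam performance, so intervening on studying matters only in the second.

def school_mark(studying: bool, diligent: bool, mark_from_exam: bool) -> bool:
    prediction = diligent      # the teacher predicts from the pupil's character
    performance = studying     # studying drives exam performance
    return performance if mark_from_exam else prediction

# do(studying=False) for a pupil of diligent character:
print(school_mark(False, True, mark_from_exam=False))  # True: mark survives skipping
print(school_mark(False, True, mark_from_exam=True))   # False: the mark is lost
```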
For Newcomb we have:
person's qualities --> Omega's prediction --> contents of boxes
         |                                            |
         |                                            |
         V                                            V
person's decision ----------------------------------> payoff
The hypotheses prevent us from performing surgery on this graph to model do(person’s decision). The do() operator requires deleting all in-edges to the node operated on, making it causally independent of all of its non-descendants in the graph. The hypotheses of Newcomb stipulate that this cannot be done: every consideration you could possibly employ in making a decision is assumed to be already present in the personal qualities that Omega’s prediction is based on.
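Here is a minimal sketch of that surgery, assuming a DAG stored as a mapping from each node to the list of its parents (the representation and names are mine, purely illustrative):

```python
def do(graph, node):
    """Return a copy of `graph` with every in-edge to `node` deleted,
    modelling the intervention do(node)."""
    surgered = {n: list(parents) for n, parents in graph.items()}
    surgered[node] = []  # the intervened node no longer listens to its parents
    return surgered

# Newcomb's problem as drawn above:
newcomb = {
    "person's qualities": [],
    "Omega's prediction": ["person's qualities"],
    "contents of boxes": ["Omega's prediction"],
    "person's decision": ["person's qualities"],
    "payoff": ["person's decision", "contents of boxes"],
}

# Surgery on the decision severs its link back to the person's qualities,
# which is precisely the deletion the hypotheses of Newcomb's problem forbid.
print(do(newcomb, "person's decision"))
```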
A-B:
Unknown factors ---> Gene ---> Lifespan
       |
       |
       V
    Choice
or:
Gene ---> Lifespan
  |
  |
  V
Choice
or both together.
Here, it may be unfortunate to discover oneself making choice B, but by the hypotheses of this problem, you have no choice. As with Newcomb, causal surgery is excluded by the problem. To the extent that your choice is causally independent of the given arrow, to that extent you can ignore lifespan in making your choice—indeed, it is to that extent that you have a choice.
For Solomon’s Problem (which, despite the great length of the article, you didn’t set out) the diagram is:
charisma ----> overthrow
    |
    |
    V
commit adultery
This implies that while it may be unfortunate for Solomon to discover adulterous desires, he will not make himself worse off by acting on them. This differs from A-B because we are given some causal mechanisms, and know that they are not deterministic: an uncharismatic leader still has a choice to make about adultery, and to the extent that it is causally independent of the lack of charisma, it can be made, without regard to the likelihood of overthrow.
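As a sanity check, here is a small simulation of that fork (all probabilities are invented for the sketch):

```python
import random

def trial(force_adultery=None):
    charismatic = random.random() < 0.5
    adultery = (random.random() < (0.2 if charismatic else 0.8)
                if force_adultery is None else force_adultery)
    # Overthrow depends only on charisma; adultery has no causal arrow to it.
    overthrown = random.random() < (0.1 if charismatic else 0.6)
    return adultery, overthrown

n = 100_000
observed = [trial() for _ in range(n)]
# Observationally, adulterers are overthrown more often (both track charisma)...
print(sum(o for a, o in observed if a) / sum(1 for a, o in observed if a))
# ...but intervening on adultery leaves the overthrow rate untouched:
print(sum(trial(force_adultery=True)[1] for _ in range(n)) / n)
print(sum(trial(force_adultery=False)[1] for _ in range(n)) / n)
```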
Similarly CGTA:

CGTA ---> throat abscesses
  |
  |
  V
chew gum

and the variant:

CGTA ---> throat abscesses
  |               ^
  |               |
  V               |
chew gum ---------+

in which chewing gum is protective against throat abscesses, and positively to be recommended.
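The variant rewards the same analysis. A toy simulation (numbers made up for the sketch) shows gum-chewers looking worse off observationally even though the intervention do(chew gum) helps:

```python
import random

def trial(force_gum=None):
    gene = random.random() < 0.5      # CGTA raises both gum-chewing and abscesses
    gum = (random.random() < (0.8 if gene else 0.2)
           if force_gum is None else force_gum)
    p_abscess = (0.6 if gene else 0.2) - (0.15 if gum else 0.0)  # gum protects
    return gum, random.random() < p_abscess

n = 200_000
obs = [trial() for _ in range(n)]
# Gum-chewers have more abscesses observationally (the gene confounds)...
print(sum(a for g, a in obs if g) / sum(1 for g, a in obs if g))          # ~0.37
print(sum(a for g, a in obs if not g) / sum(1 for g, a in obs if not g))  # ~0.28
# ...yet the intervention do(chew gum) lowers the abscess rate:
print(sum(trial(force_gum=True)[1] for _ in range(n)) / n)   # ~0.25
print(sum(trial(force_gum=False)[1] for _ in range(n)) / n)  # ~0.40
```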
Newcomb’s Soda:
soda assignment ---> $1M
        |
        |
        V
choice of ice cream ---> $1K
Here, your inclination to choose a flavour of ice-cream is informative about the $1M prize, but the causal mechanism is limited to experiencing a preference. If you would prefer $1K to a chocolate ice-cream then you can safely choose vanilla.
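The same point in code (a toy model; the probabilities and the chocolate-soda convention are mine): overriding your felt preference moves only the $1K, never the $1M.

```python
import random

def trial(override=None):
    chocolate_soda = random.random() < 0.5     # soda assignment
    million = chocolate_soda                   # the $1M rides on the soda alone
    inclination = "chocolate" if chocolate_soda else "vanilla"
    choice = inclination if override is None else override
    thousand = (choice == "vanilla")           # the $1K rides on the choice
    return million, thousand

n = 100_000
forced = [trial(override="vanilla") for _ in range(n)]
print(sum(m for m, _ in forced) / n)  # ~0.5: unchanged by choosing vanilla
print(sum(t for _, t in forced) / n)  # 1.0: the $1K is guaranteed
```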
Finally, here’s another decision problem I thought of. Unlike all of the above, it requires no sci-fi hypotheses; real-world examples exist everywhere, and correctly solving them is an important practical skill.
I want to catch a train in half an hour. I judge that this is enough time to get to the station, buy a ticket, and board the train. Based on a large number of similar experiences in the past, I can confidently predict that I will catch the train. Since I know I will catch the train, should I actually do anything to catch the train?
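The structure, in a line of code (illustrative): the prediction is about the whole process, actions included, not a licence to skip them.

```python
def catch_train(leave_on_time: bool, buy_ticket: bool, board: bool) -> bool:
    # The past record that grounds my confident prediction is a record of
    # doing all three things, every time.
    return leave_on_time and buy_ticket and board

assert catch_train(True, True, True)         # the predicted, habitual case
assert not catch_train(False, False, False)  # "do nothing" is a different query
```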
The general form of this problem can be applied to many others. I predict that I’m going to ace an upcoming exam. Should I study? I predict I’ll win an upcoming tennis match. Should I train for it? I predict I’ll complete a piece of contract work on time. Should I work on it? I predict that I will post this. Should I click the “Comment” button?
> For the School Mark problem, the causal diagram I obtain from the description is one of these:
>
> [diagram]
>
> or
>
> [diagram]
>
> For the first of these, the teacher has waived the requirement of actually sitting the exam, and the student needn’t bother. In the second, the pupil will not get the marks except by studying for and taking the exam. See also the decision problem I describe at the end of this comment.
I think it’s clear that Pallas had the first diagram in mind, and his point was exactly that the rational thing to do is to study despite the fact that the mark has already been written down. I agree with this.
Think of the following three scenarios:
A: No prediction is made and the final grade is determined by the exam performance.
B: A perfect prediction is made and the final grade is determined by the exam performance.
C: A perfect prediction is made and the final grade is based on the prediction.
Clearly, in scenario A the student should study. You are saying that in scenario C, the rational thing to do is not studying. Therefore, you think that the rational decision differs between either A and B, or between B and C. Going from A to B, why should the existence of someone who predicts your decision (without you knowing the prediction!) affect which decision the rational one is? That the final mark is the same in B and C follows from the very definition of a “perfect prediction”. Since each possible decision gives the same final mark in B and C, why should the rational decision differ?
In all three scenarios, the mapping from the set of possible decisions to the set of possible outcomes is identical—and this mapping is arguably all you need to know in order to make the correct decision. ETA: “Possible” here means “subjectively seen as possible”.
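The identity of the three mappings can be spelled out directly (a sketch; the good/bad mark stands in for whatever grading scale applies):

```python
def scenario_a(study: bool) -> str:   # no prediction; mark from the exam
    return "good mark" if study else "bad mark"

def scenario_b(study: bool) -> str:   # perfect prediction; mark from the exam
    prediction = study                # recorded, but causally idle
    return "good mark" if study else "bad mark"

def scenario_c(study: bool) -> str:   # mark written down from the prediction
    prediction = study                # a perfect prediction equals the choice
    return "good mark" if prediction else "bad mark"

# The decision-to-outcome mapping is identical in all three scenarios:
for decision in (True, False):
    assert scenario_a(decision) == scenario_b(decision) == scenario_c(decision)
```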
By deciding whether or not to study, you can, from your subjective point of view, “choose” whether you were determined to study or not.
My first diagram is scenario C and my second is scenario B. In the first diagram there is no (ETA: causal) dependence of the final mark on exam performance. I think pallas’ intended scenario was more likely to be B: the mark does (ETA: causally) depend on exam performance and has been predicted. Since in B the mark depends on exam performance, it is necessary to study and take the exam.
In the real world, where teachers do not possess Omega’s magic powers, teachers may very well be able to predict pretty much how their students will do. For that matter, the students themselves can predict how they will do, which transforms the problem into the very ordinary, non-magical one I gave at the end of my comment. If you know how well you will do on the exam, and want to do well on it, should you (i.e. is it the correct decision to) put in the work? Or for another example of topical interest, consider the effects of genes on character.
Unless you draw out the causal diagrams, Omega is just magic: an imaginary phenomenon with no moving parts. As has been observed by someone before on LessWrong, any decision theory can be defeated by suitably crafted magic: Omega fills the boxes, or whatever, in the opposite way to whatever your decision theory will conclude. Problems of that sort offer little insight into decision theory.