We believe in the forecasting power, but we are uncertain as to what mechanism that forecasting power is taking advantage of to predict the world.
Analogously, I know Omega will defeat me at chess, but I do not know which opening move he will play.
In this case, the TDT decision depends critically on which causal mechanism underlies that forecasting power. Since we do not know, we will have to apply some principles for decision under uncertainty, which will depend on the payoffs, and on other features of the situation. The EDT decision does not. My intuitions and, I believe, the intuitions of many other commenters here, are much closer to the TDT approach than the EDT approach. Thus your examples are not very helpful to us—they lump things we would rather split, because our decisions in the sort of situation you described would depend in a fine-grained way on what causal explanations we found most plausible.
Suppose it is well-known that the wealthy in your country are more likely to adopt a certain distinctive manner of speaking due to the mysterious HavingRichParents gene. If you desire money, could you choose to have this gene by training yourself to speak in this way?
I agree that it is challenging to assign forecasting power to a study, since we’re uncertain about many background conditions. There is forecasting power to the degree that the set A of all variables involved with previous subjects allows for predictions about the set A’ of variables involved in our case. But when we deal with Omega, who is defined to make true predictions, we need to take this forecasting power into account no matter what the underlying mechanism is. What if Omega in Newcomb’s Problem were defined to make true predictions and you knew nothing about the underlying mechanism? Wouldn’t you one-box after all?
Let’s call Omega’s prediction P and the future event F. Once Omega’s predictions are defined to be true, we can state the following logical equivalences:
P(one-boxing) <--> F(one-boxing) and P(two-boxing) <--> F(two-boxing). Given these conditions, it is impossible to two-box when box B is filled with a million dollars (you could also formulate this in terms of probabilities, where such an impossible event would have probability 0).
I admit that we have to be cautious when we deal with predictors that are not defined to make true predictions.
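As a minimal sketch (my own illustration, not part of the original argument), the point can be checked by enumerating possible worlds: once prediction and action are required to match, the world “two-box while box B is full” simply never occurs.

```python
from itertools import product

ACTIONS = ["one-box", "two-box"]

# A predictor defined to make true predictions admits only worlds
# in which prediction == action.
worlds = [(pred, act) for pred, act in product(ACTIONS, ACTIONS) if pred == act]

# Box B is filled with $1M exactly when the prediction was "one-box", so
# "two-boxing with a full box B" requires pred == "one-box", act == "two-box".
impossible = [(pred, act) for pred, act in worlds
              if pred == "one-box" and act == "two-box"]

print(worlds)      # [('one-box', 'one-box'), ('two-box', 'two-box')]
print(impossible)  # [] -- the event has probability 0, as claimed
```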
Suppose it is well-known that the wealthy in your country are more likely to adopt a certain distinctive manner of speaking due to the mysterious HavingRichParents gene. If you desire money, could you choose to have this gene by training yourself to speak in this way?
My answer depends on the specific set-up. What exactly do we mean by “it is well-known”? It doesn’t seem to be a study that would describe the set A of all factors involved, which we could then use to derive the set A’ that applies to our own case. Unless we define “it is well-known” as an instance that allows for predictions in the direction A --> A’, I see little reason to assume forecasting power. Without forecasting power, screening off applies, and it would be foolish to train the distinctive manner of speaking.
If we specified the game so that there is forecasting power at work (or at least we had reason to believe so), then depending on your definition of choice (I prefer one that is devoid of free will) you can or cannot choose the gene. These kinds of considerations are discussed here and in the section “Newcomb’s Problem’s Problem of Free Will” in the post.
Suppose I am deciding now whether to one-box or two-box on the problem. That’s a reasonable supposition, because I am deciding now whether to one-box or two-box. There are a couple possibilities for what Omega could be doing:
Omega observes my brain, and predicts what I am going to do accurately.
Omega makes an inaccurate prediction, probabilistically independent from my behavior.
Omega modifies my brain into one it knows will one-box or will two-box, then makes the corresponding prediction.
If Omega uses predictive methods that aren’t 100% effective, I’ll treat it as a combination of cases 1 and 2. If Omega uses very powerful mind-influencing technology that isn’t 100% effective, I’ll treat it as a combination of cases 2 and 3.
In case 1, I should decide now to one-box. In case 2, I should decide now to two-box. In case 3, it doesn’t matter what I decide now.
If Omega is 100% accurate, I know for certain I am in case 1 or case 3. So I should definitely one-box. This is true even if case 1 is vanishingly unlikely.
If Omega is even 99.9% accurate, then I am in some combination of cases 1, 2, and 3. Whether I should decide now to one-box or two-box depends on the relative probabilities of cases 1 and 2, ignoring case 3. So even if Omega is very accurate, making the probability of case 2 small, I should still decide now to two-box if the probability of case 1 is smaller yet.
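This comparison can be made explicit with a back-of-the-envelope expected-value sketch (my own illustration; the $1,000,000 and $1,000 payoffs are the standard Newcomb values): ignoring case 3, deciding to one-box gains $999,000 with the probability of case 1 and forgoes $1,000 with the probability of case 2.

```python
def ev_difference(p1, p2, full=1_000_000, small=1_000):
    """Expected gain of deciding now to one-box rather than two-box.

    p1 -- probability Omega accurately predicts from my brain (case 1)
    p2 -- probability the prediction is independent of my decision (case 2)
    Case 3 is ignored: there the decision makes no difference either way.
    """
    gain_onebox = p1 * (full - small)  # case 1: $1M instead of the $1k box
    gain_twobox = p2 * small           # case 2: two-boxing always adds $1k
    return gain_onebox - gain_twobox

print(ev_difference(0.999, 0.001))  # positive: one-box
print(ev_difference(1e-6, 0.001))   # negative: two-box despite high accuracy
```

The second call is the point of the paragraph: high overall accuracy does not settle the question if case 1 itself is sufficiently improbable.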
I mean, I am describing a very specific forecasting technique that you can use to make forecasts right now. Perhaps a more precise version is this: you observe children at one of two different preschools and note which school each is at. You find that almost 100% of the children at one preschool end up richer than the children at the other. You are then able to forecast that future children observed at preschool A will grow up to be rich, and future children observed at preschool B will grow up to be poor. You then have a child. Should you bring them to preschool A? (Here I don’t mean have them attend the school. They can simply go to the building at whatever time of day the study was conducted, then leave. That is sufficient to make highly accurate predictions, after all!)
I don’t really know what you mean by “the set A of all factors involved.”