The less I know about chess, the more certainly I can predict the outcome if I play against a grandmaster.
The only “model” I need is the knowledge that the grandmaster is an expert player and I am not. Where in that am I “running a model”?
Alright, let’s take this to the extreme. You’re playing an unknown game: all you know about it is that the grandmaster is an expert player; you don’t even know the rules or the name of the game.
Task: perfectly predict the outcome of you playing the grandmaster. That is, out of 3^^^3 runs of such a (first) game, you’d get every single game’s outcome right.
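To put a rough number on how demanding that is: if each prediction were independently wrong with some small probability p, the chance of a perfect record over N runs is (1 − p)^N, so p would have to shrink roughly like ln(2)/N just to have even odds of never slipping. A minimal sketch of that arithmetic, using merely-huge stand-ins for 3^^^3 (which no program can represent):

```python
import math

def required_error_rate(n_runs: int, target_confidence: float = 0.5) -> float:
    """Largest per-run error probability p such that a perfect record over
    n_runs independent predictions still occurs with probability at least
    target_confidence, i.e. (1 - p) ** n_runs >= target_confidence."""
    # Mathematically this is 1 - target_confidence ** (1 / n_runs); written
    # with expm1 so it stays numerically stable when n_runs is huge.
    return -math.expm1(math.log(target_confidence) / n_runs)

# 3^^^3 itself is far too large to represent; these stand-ins make the point.
for n in (10**3, 10**9, 10**18):
    p = required_error_rate(n)
    print(f"{n:>20} runs: per-run error must stay below ~{p:.3e} "
          f"(roughly ln(2)/n = {math.log(2) / n:.3e})")
```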
All the components of your reasoning process that have a chance to affect the outcome would need to be modelled, if only in some compressed yet equivalent form. For certain other predictions, such as your chance to spontaneously combust, many attributes of, say, your brain state would not need to be included in the model to predict perfectly. But for the original Newcomb question, which involves a great many cognitive subsystems, a functionally equivalent model may be very hard to tell apart from the original human.
Congruency / isomorphism tight enough to yield a perfect correspondence on a question as involved as Newcomb’s would also imply correspondence across a vast range of topics involving the same cognitive functions.
As to your observation, there may be cases where, for certain ranges of predictive precision, knowing less increases your certainty. Yet to predict perfectly you must model perfectly all components relevant to the outcome (if only in their maximally compressed form), and using that model to derive the outcome from given starting conditions amounts to computation.
Where did 3^^^3 come from? Outside of mathematics, “always” never means “always”, and in the present context, Omega does not have to be perfect.
Allowing for a margin of error, the simulation would indeed make do with lower fidelity. Yet the smaller the tolerable margin of error, the more the predictive model would have to resemble, or be isomorphic to, the functionality of all components involved in the outcome (aside from some locally inverted dynamics such as the one you pointed out).
Given an example such as “chess novice versus grandmaster”, a very rough model does indeed suffice until you get into extremely small tolerable epsilons (such as “no wrong prediction in 3^^^3 runs”).
However, for the present example, the proportion of one-boxers versus two-boxers doesn’t seem nearly that lopsided.
Thus, to maintain very high accuracy, the model would need to capture most of what distinguishes the two groups. I do grant that as the required accuracy is allowed to drop into the low-sigma range, the model probably would be very different from the actual human being, i.e. the parts that are isomorphic to that human’s thought process may reflect no more than a sliver of that person’s unique cognitive characteristics.
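To illustrate the base-rate point numerically: a predictor that knows nothing about the individual and always bets on the majority outcome scores max(p, 1 − p), which is already excellent when the split is extremely lopsided but only modestly better than a coin flip when it is closer to even. A rough sketch, with purely illustrative proportions of my own choosing:

```python
def base_rate_accuracy(p_majority: float) -> float:
    """Accuracy of a model-free predictor that ignores the individual and
    always predicts the more common outcome."""
    return max(p_majority, 1 - p_majority)

# Illustrative numbers only (not data): contrast a lopsided base rate with a
# not-so-lopsided one-box / two-box split.
cases = {
    "novice loses to the grandmaster": 0.9999,
    "chooser one-boxes":               0.60,
}
for label, p in cases.items():
    acc = base_rate_accuracy(p)
    print(f"{label}: base-rate accuracy {acc:.4f}, error rate {1 - acc:.4f}")
```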
All in the details of the problem, as always. I may have overestimated Omega’s capabilities. (I imagine Omega chuckling in the background.)
If I also know that the game has (a) no luck component and (b) no mixed-strategy Nash equilibrium (i.e. it is not like rock-paper-scissors, where rock beats scissors beats paper beats rock), then I have enough information to make a prediction accurate to within epsilon. If I don’t know those facts about the game, then you are right.
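A minimal sketch of the distinction being drawn here, assuming the game can be written as a two-player payoff matrix (toy matrices of my own choosing): rock-paper-scissors has no pure-strategy Nash equilibrium, so its only equilibrium is mixed and a single game’s result stays open even under perfect play, whereas a game with a dominant pure strategy has a determinate outcome.

```python
from itertools import product

def pure_nash_equilibria(payoffs_row, payoffs_col):
    """Brute-force search for pure-strategy Nash equilibria of a two-player
    game given as two payoff matrices (row player, column player)."""
    n_rows, n_cols = len(payoffs_row), len(payoffs_row[0])
    equilibria = []
    for r, c in product(range(n_rows), range(n_cols)):
        row_best = all(payoffs_row[r][c] >= payoffs_row[r2][c] for r2 in range(n_rows))
        col_best = all(payoffs_col[r][c] >= payoffs_col[r][c2] for c2 in range(n_cols))
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

# Rock-paper-scissors (zero-sum): 1 = win, -1 = loss, 0 = draw.
rps_row = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
rps_col = [[-v for v in row] for row in rps_row]
print(pure_nash_equilibria(rps_row, rps_col))   # [] -- only a mixed equilibrium exists

# A toy game where one strategy strictly dominates for each player:
toy_row = [[3, 1], [2, 0]]
toy_col = [[3, 2], [1, 0]]
print(pure_nash_equilibria(toy_row, toy_col))   # [(0, 0)] -- outcome is determinate
```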
But your references to occurrences like spontaneous combustion are beside the point. The task is: assign likelihood of grandmaster winning or not winning. There are many possibilities that don’t correspond to either category, but predict-the-outcome doesn’t care about those possibilities, in the same way that I don’t advise my clients about the possibility of a mistrial because the judge had a heart attack when I discuss possible litigation outcomes.
Correct me if I’m wrong, but I don’t think that “The task is: assign likelihood of grandmaster winning or not winning.” captures what Omega is doing.
For each game you play, either the grandmaster will win or he will not (tertium non datur). Since it is not possible for Omega to be wrong, the only probabilities that are assigned are 0 or 1. No updating necessary.
Say you play 5 games against someone and they will go W(in) L(oss) W L L; then Omega would predict just that, i.e. it would assign Pr(game series goes “WLWLL”) = 1.
If Omega knows whether you will accept, it also knows whether you’ll have a heart attack, or some other event barring you from accepting, since that affects the outcome. It doesn’t need to label that event “heart attack”, but its genesis still needs to be accounted for in the model, because it affects the outcome.
I don’t want to dispute definitions, but I wouldn’t say Omega erred in predicting your choice if you had a heart attack before revealing whether you would one-box or two-box. As far as I’m concerned, Omega is not known for predicting who will and won’t be able to play, just what they will do if they play.