I have written a critique of the position that one-boxing wins on Newcomb’s problem, but I have had difficulty posting it here on Less Wrong. I have temporarily posted it here.
I don’t understand what the part about “fallible” and “infallible” agents is supposed to mean. If an “infallible” agent makes the correct prediction 60% of the time and a “fallible” agent also makes the correct prediction 60% of the time, in what way should one expect them to behave differently?
It is intended to illustrate that, for a given level of certainty, one-boxing has greater expected utility with an infallible agent than it does with a fallible agent.
As for different behaviors, I suppose one might suspect the fallible agent of using statistical methods and lumping you into a reference class to make its prediction. One could be much more certain that the infallible agent’s prediction is based on what you specifically would choose.
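To make the comparison concrete, here is a minimal sketch of the standard expected-utility arithmetic, using the usual $1,000,000 / $1,000 payoffs and treating the predictor’s reliability as a single probability that the prediction matches your actual choice. The numbers and framing are my own illustration of the textbook calculation, not the essay’s specific fallible/infallible model; the point is only that the comparison turns entirely on which probability you think applies to you.

```python
# Standard Newcomb payoffs: the opaque box holds $1,000,000 iff Omega
# predicted one-boxing; the transparent box always holds $1,000.
MILLION = 1_000_000
THOUSAND = 1_000

def expected_payoffs(p_correct: float) -> tuple[float, float]:
    """Return (one-box, two-box) expected payoffs when the prediction
    matches your actual choice with probability p_correct."""
    one_box = p_correct * MILLION                    # paid only if Omega foresaw one-boxing
    two_box = THOUSAND + (1 - p_correct) * MILLION   # the million appears only if Omega erred
    return one_box, two_box

for p in (0.5, 0.6, 0.9, 1.0):
    one_box, two_box = expected_payoffs(p)
    print(f"p={p:.2f}  one-box ${one_box:,.0f}  two-box ${two_box:,.0f}")
# One-boxing pulls ahead once p exceeds 1,001,000 / 2,000,000, i.e. about 0.5005.
```

A calculation like this is only as informative as the probability fed into it, which is where the fallible/infallible distinction and the reference-class worry above come in: they are different stories about what that probability means for you specifically.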
The problem here is that Newcomb’s problem doesn’t actually state whether you are dealing with a smart predictor or a dumb predictor. It doesn’t state whether Omega is sufficiently smart. It doesn’t state whether the initial conditions that are causally connected to your choice are also causally connected to the prediction Omega makes. So without smuggled-in assumptions there is insufficient information to determine whether to one-box or two-box. You might as well flip a coin.
http://wiki.lesswrong.com/wiki/Omega
Omega is assumed to be a “smart predictor”.
I’ve seen statements of Newcomb-like problems saying things like “Omega gets it right 90% of the time”. In that case it seems like it should matter whether that is because of cosmic rays that affect all predictions equally, or because he can only usefully predict the 90% of people who are easiest to predict, in which case, if I’m not mistaken, you can two-box if you’re confident you’re in the other 10%. I’m sure this has been thought through somewhere before.
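To illustrate that point with assumed numbers (the comment only gives the 90% figure, so the split below is hypothetical): suppose Omega predicts the “easy” 90% of people perfectly and does no better than chance on the remaining 10%. The population-level accuracy then comes out high, but the expected utilities conditional on being in the hard-to-predict group point the other way.

```python
# Hypothetical split: Omega is perfect on the 90% of people who are easy
# to predict and at chance on the other 10%. All figures are assumptions
# for illustration, not part of the original problem statement.
MILLION, THOUSAND = 1_000_000, 1_000

def expected_payoffs(p_correct: float) -> tuple[float, float]:
    """(one-box, two-box) expected payoffs given P(prediction matches choice)."""
    return p_correct * MILLION, THOUSAND + (1 - p_correct) * MILLION

p_easy, p_hard = 1.0, 0.5
p_population = 0.9 * p_easy + 0.1 * p_hard   # 0.95 averaged over everyone

for label, p in [("population average", p_population),
                 ("easy to predict", p_easy),
                 ("hard to predict", p_hard)]:
    one_box, two_box = expected_payoffs(p)
    print(f"{label:>20}: one-box ${one_box:,.0f}  two-box ${two_box:,.0f}")
# With these numbers, someone confident they are in the hard-to-predict 10%
# does (slightly) better by two-boxing, even though the population average
# makes one-boxing look far better.
```

Whether the advertised accuracy reflects uniform noise (the cosmic-ray case) or a mixture of predictable and unpredictable people is exactly what determines which row applies to you.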
You may have misunderstood what is meant by “smart predictor”.
The wiki entry does not say how Omega makes the prediction. Omega may be intelligent enough to be a smart predictor, but an Omega that intelligent is equally capable of acting as a dumb predictor. What matters is the method Omega uses to generate the prediction, and whether that method causally connects Omega’s prediction back to the initial conditions that causally determine your choice.
Furthermore, a significant part of the essay explains in detail why many of the assumptions associated with Omega are problematic.
Edited to add: on rereading, I can see how the bit where I say “It doesn’t state whether Omega is sufficiently smart” is a bit misleading. It should be read as a statement about the method of making the prediction, not about Omega’s intelligence.