Would there be any unintended consequences? I’m worried that possessing an incorrect belief may lead the Oracle to lose accuracy in other areas.
For instance, suppose accuracy is defined in terms of the reaction of the first person to read the output, and that person is isolated from the rest of the world. Then we can get the Oracle to act as if it believed a nuclear bomb were due to go off before that person could communicate with the rest of the world.
In this example, would the imminent nuclear threat affect the Oracle’s reasoning process? I’m sure there are some questions whose answers could vary depending on the likelihood of a nuclear detonation in the near future.
Regardless of the mechanism for misleading the Oracle, its predictions of the future ought to become less accurate in proportion to how useful they have been in the past: it is effectively forecasting a counterfactual world in which its output is never acted on, so the more the real world has been shaped by its past answers, the further that counterfactual drifts from what actually happens (see the sketch below).
“What will the world look like when our source of super-accurate predictions suddenly disappears” is not usually the question we’d really want to ask. Suppose people normally make business decisions informed by oracle predictions: how would the stock market react to the announcement that companies and traders everywhere had been metaphorically lobotomized?
We might not even need to program in “imminent nuclear threat” manually. “What will our enemies do when our military defenses are suddenly in chaos due to a vanished oracle?”
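To make that first point concrete, here is a toy sketch. It is my own construction, not from the thread: the `wealth` variable, the 5% growth rate, and the dynamics are all invented for illustration. It models a world that compounds on oracle advice, while the oracle forecasts the counterfactual world in which its advice is never acted on; the forecast gap grows in step with the accumulated benefit of past advice.

```python
# Toy model: the more the world has been steered by oracle advice,
# the further the oracle's "as if I vanished" forecasts drift from
# what actually happens. All numbers are illustrative assumptions.

def actual_step(wealth: float, follows_oracle: bool) -> float:
    # Hypothetical dynamics: oracle-guided decisions add 5% per step.
    return wealth * (1.05 if follows_oracle else 1.00)

def counterfactual_forecast(wealth: float, horizon: int) -> float:
    # The oracle predicts as if its advice were never acted on again.
    for _ in range(horizon):
        wealth = actual_step(wealth, follows_oracle=False)
    return wealth

wealth = 100.0
for year in range(1, 11):
    forecast = counterfactual_forecast(wealth, horizon=1)
    wealth = actual_step(wealth, follows_oracle=True)  # the real world follows advice
    print(f"year {year:2d}: actual={wealth:8.2f}  forecast={forecast:8.2f}  "
          f"gap={wealth - forecast:6.2f}")
```

The gap each year is 5% of current wealth, and wealth itself is the compounded product of past oracle-guided decisions, so the forecast error grows exactly in proportion to how useful the Oracle has been.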
> Would there be any unintended consequences? I’m worried that possessing an incorrect belief may lead the Oracle to lose accuracy in other areas.

> In this example, would the imminent nuclear threat affect the Oracle’s reasoning process? I’m sure there are some questions whose answers could vary depending on the likelihood of a nuclear detonation in the near future.
The Oracle does not possess inaccurate beliefs. See http://lesswrong.com/lw/ltf/false_thermodynamic_miracles/ and http://lesswrong.com/r/discussion/lw/lyh/utility_vs_probability_idea_synthesis/. Note that I’ve always very carefully phrased it as “act as if it believed” rather than “believed”.
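A minimal sketch of that distinction, loosely following the utility-vs-probability idea in the linked posts; the action names, the payoff table, and P_E = 0.01 are all invented for illustration. Agent A is given a genuinely false belief, P(E) = 1; agent B keeps its honest probabilities but has its utility zeroed out in every world where E is false. Both choose the same action, but only agent A’s corrupted probability estimates would leak into other answers.

```python
# E = "the first reader never communicates with the outside world".
# Agent A: believes E with certainty (corrupted probabilities).
# Agent B: honest P(E), but utility conditioned on E -- it *acts as if*
# it believed E while its probability estimates stay accurate.

P_E = 0.01  # the honest probability that E occurs (assumed)

ACTIONS = ["frank", "cautious"]

# Hypothetical payoffs: 'frank' is best if the output never leaks (E),
# 'cautious' is safer if it does leak (not-E).
PAYOFF = {("frank", True): 10, ("frank", False): -5,
          ("cautious", True): 2, ("cautious", False): 3}

def expected_utility(action, utility, p_e):
    return p_e * utility(action, True) + (1 - p_e) * utility(action, False)

def base_utility(action, e):
    return PAYOFF[(action, e)]

def conditioned_utility(action, e):
    # Worlds where E is false contribute nothing to the decision.
    return PAYOFF[(action, e)] if e else 0.0

choice_a = max(ACTIONS, key=lambda a: expected_utility(a, base_utility, p_e=1.0))
choice_b = max(ACTIONS, key=lambda a: expected_utility(a, conditioned_utility, p_e=P_E))

print(choice_a, choice_b)  # frank frank -- identical behaviour
print(P_E)                 # agent B still reports the honest 0.01 for E
```

Because only the utility is modified, agent B’s answers to unrelated probability questions are untouched, which is exactly the worry raised in the opening comment.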