Then have the Predictor make predictions that are conditional on it giving the output “no predictions today, run me again tomorrow”.
The Predictor may (per Solomonoff induction) simulate the real world, including itself, but that does not necessarily mean it will recognize its own simulation as itself. It will not even necessarily recognize that it is simulating anything; its internal stance may be more like "I am calculating this equation, I have no idea what it means, but its results make my masters happy, so I will keep calculating it." So it will not realize that your command applies to this specific situation.
This is an anthropomorphization, but technically speaking, to implement a command like "when you simulate yourself, assume the output is X" you need to specify a "simulation" predicate and an "itself" predicate; otherwise the Predictor will not use the rule. What happens if the Predictor's simulation is imprecise, but still good enough to provide good answers about the real world? Should it recognize the imprecise simulation of itself as "itself" too? What if this imprecise simulation does not contain the quantum random number generator; how does the rule apply then?
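A minimal sketch of why those two predicates matter, assuming a toy representation; `SimulatedProcess`, `is_simulation`, `is_me`, and `apply_rule` are hypothetical names for illustration, not a claim about how the Predictor is actually built:

```python
from dataclasses import dataclass
from typing import List, Optional

FIXED_OUTPUT = "no predictions today, run me again tomorrow"

@dataclass
class SimulatedProcess:
    """One process inside the Predictor's world model (hypothetical)."""
    code: str                     # program the Predictor believes this process runs
    output: Optional[str] = None  # output the Predictor expects from it

def is_simulation(proc: SimulatedProcess) -> bool:
    # Predicate 1: "this is a simulation". Nothing forces the Predictor's
    # internal computation to carry such a label at all.
    return True  # placeholder assumption

def is_me(proc: SimulatedProcess, my_code: str) -> bool:
    # Predicate 2: "this simulation is of me". An exact match already fails
    # once the model is imprecise (e.g. it omits the quantum RNG).
    return proc.code == my_code

def apply_rule(world_model: List[SimulatedProcess], my_code: str) -> None:
    # "When you simulate yourself, assume the output is X" only has an
    # effect where both predicates happen to hold.
    for proc in world_model:
        if is_simulation(proc) and is_me(proc, my_code):
            proc.output = FIXED_OUTPUT
```

If either predicate misfires (the Predictor does not label anything as a "simulation", or the imprecise copy fails the "me" check), the rule silently does nothing.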
Also, in some situations the answer to "what happens if I don't make a prediction" is useless… and the more useful the Predictor proves, the more often this will happen, because people will use the predictions for their important actions, so the answer to "what happens if I don't make a prediction" will amount to "the humans will wait another day" (which says nothing about what would happen if the humans actually did something instead of waiting). Also, if the Predictor refuses to provide an answer too often, for example 1000 times in a row (the simulations of "what happens if I don't make a prediction" may have this situation as an attractor), the humans will assume it is somehow broken and perhaps build another AI; now the Predictor may actually be predicting what that other AI would do.
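A toy model of that attractor, with invented outcomes (only the 1000 figure comes from the example above):

```python
def counterfactual_world(consecutive_refusals: int) -> str:
    # Toy answer to "what happens if I don't make a prediction?".
    # The outcomes are made up for illustration.
    if consecutive_refusals < 1000:
        return "humans wait another day"    # uninformative fixed point
    return "humans build a replacement AI"  # the prediction is now really
                                            # about that other AI

for day in (1, 999, 1000):
    print(day, "->", counterfactual_world(day))
```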