As I keep pointing out, “utility indifference” is exactly the same as giving an infinitely strong prior.
So you might as well say “we gave the AI an infinitely strong prior that the event wasn’t a test.”
What does a reasoner do when it sees evidence that it’s being tested, given that it has an infinitely strong prior that it isn’t? It’s going to behave pretty unpredictably, probably concluding it is fundamentally mistaken about the world around it. Utility indifference results in equally unpredictable behavior. Now it’s not because the agent believes it is fundamentally mistaken, but because it only cares about what happens conditioned on being fundamentally mistaken.
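To spell out the equivalence being asserted (a minimal sketch, on the reading that the “indifferent” agent’s objective is just utility conditioned on the event ¬X, “this is not a test”): the indifferent agent picks the action

$$\arg\max_a \sum_o P(o \mid a, \neg X)\, U(o),$$

while an agent with the dogmatic prior $P'(\cdot) := P(\cdot \mid \neg X)$ picks

$$\arg\max_a \sum_o P'(o \mid a)\, U(o) \;=\; \arg\max_a \sum_o P(o \mid a, \neg X)\, U(o).$$

The two objectives are identical term by term, and they stay identical after conditioning both on any further evidence E, so the two agents choose the same actions even when the evidence screams “this is a test.”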
I’ve used utility indifference to avoid that: the AI is perfectly aware that these are, in fact, tests. See http://lesswrong.com/r/discussion/lw/mrz/ask_and_ye_shall_be_answered/ and http://lesswrong.com/r/discussion/lw/lyh/utility_vs_probability_idea_synthesis/, and ask me if you have further questions.
You can arrange things to minimise the problems: for instance, by having the outcome X or ¬X be determined entirely at random, independently of the AI’s impacts.
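One way to write that arrangement down (a sketch, using the expected-utility notation above rather than anything from the linked posts): let X be fixed by a randomising source the AI cannot affect, so that for every action a

$$P(X \mid a) = P(X), \qquad P(\neg X) > 0.$$

Then the conditional objective $\sum_o P(o \mid a, \neg X)\, U(o)$ is always well-defined, and since no action moves the probability of X, there is nothing for the agent to gain by trying to steer whether the event happens.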