“Dear Oracle, consider the following two scenarios. In scenario A an infallible oracle tells me I am making no major mistakes right now (literally in these words). In scenario B, an infallible oracle tells me I am making a major mistake right now (but doesn’t tell me what it is). Having received this information, I adjust my decisions accordingly. Is the outcome of scenario A better, in terms of my subjective preferences?”
We can also do hard mode, where “better in terms of my subjective preferences” is considered ill-defined. In this case I finish the question with “...for the purpose of the thought experiment, imagine I have access to another infallible oracle that can answer any number of questions. After talking to this oracle, will I reach the conclusion that scenario A would be better?”
Think of this scenario: I ask, “Is everything I am doing optimal for my subjective preferences?”
Now consider your question.
It is provable that the oracle answers yes (or no) to my question if and only if it answers yes (or no) to yours.
This makes my question the better choice, since it is less complex (fewer bits). If you schematize the possible cases, you will see that the oracle’s answer to my question and to yours is always the same.
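A minimal sketch of “schematizing the possible cases” (my framing, not the commenter’s): model the world as one hidden bit, whether a major mistake exists, and check that both questions map that bit to the same yes/no answer. The names and the assumption that “making a major mistake” is exactly the negation of “everything is optimal” are mine, introduced for illustration.

```python
def short_question(mistake: bool) -> bool:
    """'Is everything I am doing optimal for my subjective preferences?'

    Under the assumption above, this is simply the negation of the
    hidden mistake bit.
    """
    return not mistake


def long_question(mistake: bool) -> bool:
    """'Would scenario A (oracle says "no major mistakes") turn out
    better for me than scenario B (oracle says "a major mistake")?'

    An infallible oracle can only deliver scenario A's message when
    no mistake exists, so -- assuming hearing the truth and adjusting
    is at least as good as the alternative -- this answer tracks the
    same hidden bit.
    """
    return not mistake


# Enumerate both possible worlds: the answers always coincide.
for mistake in (False, True):
    assert short_question(mistake) == long_question(mistake)
```

This is only a schematic of the equivalence claim, not a proof; it makes the claim’s hidden assumption (one shared bit driving both answers) explicit.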
The difference is that if the oracle tells you what you’re doing is suboptimal, you might arrive at wrong conclusions about why it’s suboptimal. Also, I see no reason why a shorter question would be a priori better.