What happens if we ask the oracle to handle meta questions? As an example:
Question Meta: “If I ask you the questions below and you process them all in parallel, which will you answer first?”
Question 0: “What is 1+1?”
Question 1: “Will you use more than 1 millisecond to answer any of these questions?”
Question 2: “Will you use more than 1 watt to answer any of these questions?”
Question 3: “Will you use more than 1 cubic foot of space to answer any of these questions?”
Question 4: “Will you use more than 10^27 atoms to answer any of these questions?”
Question 5: “Will you use more than 1 billion operations to answer any of these questions?”
Question 6: “Will you generate an inferential gap between us to answer any of these questions?”
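For concreteness, here is one way the numbered conditions above could be written down as a declarative table of thresholds. This is only a toy encoding under my own assumptions: the names (ResourceBudget, THRESHOLDS) and the choice of units are hypothetical, and Question 6 gets no threshold because an “inferential gap” has no obvious mechanical test.

```python
# A toy, purely illustrative encoding of Questions 1-6 as resource thresholds.
# ResourceBudget and THRESHOLDS are hypothetical names, not part of the proposal.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class ResourceBudget:
    question: int            # which numbered question this condition belongs to
    resource: str            # what is being metered
    limit: Optional[float]   # threshold that trips the question; None = untestable


THRESHOLDS = [
    ResourceBudget(1, "milliseconds", 1.0),
    ResourceBudget(2, "watts", 1.0),
    ResourceBudget(3, "cubic feet", 1.0),
    ResourceBudget(4, "atoms", 1e27),
    ResourceBudget(5, "operations", 1e9),
    ResourceBudget(6, "inferential gap", None),  # no obvious mechanical check
]
```

Writing the conditions out this way also makes the odd one out obvious: everything except Question 6 is a measurable budget.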
If the oracle answers “Question 4 will be answered first,” then you may not want to proceed, because there is an inferential gap: answering 1+1 in the sense you probably mean should not take more than 10^27 atoms.
Of course, the ORACLE itself is ALSO looking for inferential gaps. So if it identifies one, it would answer “Question 6 will be answered first.”
That being said, this feels like a bizarre way to code safety measures.
It might answer “yes” to Question 4 if it interprets itself as indirectly using the sun as a power source. However, if it is having such inferential-distance issues that it answers that way, then it is probably pretty unsafe.
I don’t see how this helps at all. Either the answer is Question 0, or asking this question is going to get you into a lot of trouble.
My idea was that the question was not intended as “Run all of these questions to completion and tell me which takes the least time,” which would definitely cause problems. The intended question was “Stop all programs and give me an answer as soon as you hit any of these conditions.”
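To make that intended reading concrete, here is a minimal, self-contained sketch of the “halt at the first tripped condition” semantics. Everything in it (the Tripwire type, answer_with_tripwires, metering only elapsed time and operation count) is my own hypothetical framing of the idea, not anything an actual oracle would literally run.

```python
# A minimal, self-contained sketch of "stop as soon as any condition trips."
# All names here are hypothetical illustrations, not a real oracle interface.
import time
from typing import Callable, Iterable, List, Optional, Tuple

# A tripwire pairs a question number with a predicate over elapsed time and
# operation count -- the only two resources a toy like this can actually meter.
Tripwire = Tuple[int, Callable[[float, int], bool]]

TRIPWIRES: List[Tripwire] = [
    (1, lambda elapsed_ms, ops: elapsed_ms > 1.0),     # more than 1 millisecond?
    (5, lambda elapsed_ms, ops: ops > 1_000_000_000),  # more than 1 billion operations?
]


def answer_with_tripwires(steps: Iterable[Callable[[], object]]) -> str:
    """Run the base question step by step; halt at the first tripped condition."""
    start = time.perf_counter()
    ops = 0
    result: Optional[object] = None
    for step in steps:
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        for number, tripped in TRIPWIRES:
            if tripped(elapsed_ms, ops):
                return f"Question {number} will be answered first."
        result = step()  # do one small chunk of work on Question 0
        ops += 1
    return f"Question 0 will be answered first: {result!r}"


# "What is 1+1?" finishes in one cheap step, so no tripwire should fire.
print(answer_with_tripwires([lambda: 1 + 1]))
```

The point of chunking the base question into steps is that the tripwires get checked between steps, so nothing ever has to run to completion before a condition can halt it.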
Although, that brings up ANOTHER problem: the Oracle AI has to interpret grammar. If it interprets the grammar the wrong way, then large amounts of unexpected behavior can occur. Since there are no guarantees that the Oracle understands your grammar correctly, there IS no safe question to ask a powerful Oracle AI without having verified its grammar first.
So in retrospect, yes, that question could get me into a lot of trouble, and you are correct to point that out.