OK, OK, yes, there are lots of issues with Oracle AIs. But I think most of the posts here are avoiding the question.
I can readily imagine the scenario where we’ve come up with logical properties that would soundly keep the AI from leaving its box, and model-checked the software and hardware to prove those properties of the Oracle AI. We ensure that the only actual information leaving the Oracle AI is the oracle’s answers to our queries. This is difficult, but it doesn’t seem impossible—and, in fact, it’s rather easier to do this than to prove that the Oracle AI is friendly. That’s why we’d make an Oracle AI in the first place.
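As a toy sketch of that single-channel condition (all names here are hypothetical, and a real version would rest on model-checking the whole software and hardware stack rather than on a wrapper like this), the intended information-flow restriction looks roughly like:

```python
import multiprocessing as mp

def _oracle_worker(question: str, answers_out) -> None:
    # Stand-in for the boxed Oracle AI; in this sketch it has no I/O
    # other than the one send-only pipe it is handed.
    answer = f"stub answer to: {question}"
    answers_out.send(answer)
    answers_out.close()

def ask(question: str, timeout: float = 10.0) -> str:
    # The receive end of this pipe is the only sanctioned channel
    # out of the box: one answer string per query, nothing else.
    recv_end, send_end = mp.Pipe(duplex=False)
    proc = mp.Process(target=_oracle_worker, args=(question, send_end))
    proc.start()
    answer = recv_end.recv() if recv_end.poll(timeout) else "<no answer>"
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()
    return answer

if __name__ == "__main__":
    print(ask("Is P = NP?"))
```

Everything interesting, of course, is in the “prove it can’t do anything else” part, which a wrapper like this doesn’t touch.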
If I understand the problem’s setting correctly, by positing that we have an Oracle AI, we assume the above kinds of conditions. We don’t assume that the AI is honest, or that its goals are aligned with our interests.
Under these conditions, what can you ask?
After reading this, I’ve become pretty sure that I have a huge inferential gap in relation to this problem. I attempted to work it out in my head, and I may have gotten somewhere, but I’m not sure where.
1: “Assume we have a machine whose goal is accurate answers to any and all questions. We’ll call it an Oracle AI.”
2: “Oh. Wouldn’t that cause various physical safety problems? You know, like taking over the world and such?”
1: “No, we’re just going to assume it won’t do that.”
2: “Oh, okay.”
1: “How do we know it doesn’t have hidden goals and won’t give inaccurate answers?”
2: “But the defining assumption of an Oracle AI is that its goal is to provide accurate answers to questions.”
1: “Assume we don’t have that assumption.”
2: “So we DON’T have an Oracle AI?”
1: “No, we have an Oracle AI, it’s just not proven to be honest or to actually have answering questions as its only goal.”
2: “But that was the definition… that we assumed? In what sense do we HAVE an Oracle AI when its definition includes both A and Not A? I’m utterly lost.”
1: “We’re essentially trying to establish an Oracle AI Prover, to prove whether the Oracle AI is accurate or not.”
2: “Wait, I have an idea: Gödel’s incompleteness theorems. The Oracle can answer ANY and ALL questions, but there must be at least one true thing it can’t prove. What if, in this case, that were its trustworthiness? A system that could prove its own trustworthiness would have to be unable to prove something else, and the Oracle AI is stipulated to be able to answer any question, which would seem to mean it’s stipulated that it can prove everything else. Except its trustworthiness.”
1: “No, I mean, we’re assuming that the Oracle AI CAN prove its trustworthiness SOMEHOW.”
2: “But then, wouldn’t Gödel’s incompleteness theorems mean there would have to be something else it CAN’T prove? And then it’s not an Oracle anymore, is it?”
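(For reference, the result being gestured at here is Gödel’s second incompleteness theorem in its standard form, nothing specific to Oracle AIs:

$$T \text{ consistent, effectively axiomatized, and containing arithmetic} \;\Longrightarrow\; T \nvdash \operatorname{Con}(T)$$

That is, such a theory cannot prove its own consistency, which is the closest formal analogue of “it can’t prove its own trustworthiness.”)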
I’ll keep thinking about this. But thank you for the thought-provoking question!
You need to be able to check the answer, even though the AI was needed to generate it.
You could start by asking it questions to which the answers are already known. The Oracle never knows whether the question you’re asking is just a test of its honesty or a real request for new insight.
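A minimal sketch of that auditing idea (assuming a callable `oracle(question) -> answer` interface; every name here is hypothetical): randomly intersperse questions with already-verified answers among the real ones, so the Oracle can’t distinguish a test from a genuine query.

```python
import random

def audit_and_query(oracle, real_questions, known_qa, test_prob=0.5):
    """Mix known-answer audit questions among real ones.

    oracle: callable mapping a question string to an answer string (assumed interface).
    known_qa: dict of question -> independently verified answer.
    """
    answers = {}
    for q in real_questions:
        # With probability test_prob, slip in an audit question first.
        if known_qa and random.random() < test_prob:
            test_q, expected = random.choice(list(known_qa.items()))
            if oracle(test_q) != expected:
                raise RuntimeError(f"Oracle failed an honesty audit on: {test_q!r}")
        answers[q] = oracle(q)  # the Oracle can't tell this apart from an audit
    return answers
```

Of course this only catches lies on questions whose answers we already know; the harder problem is checking the answers we actually wanted the Oracle for.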