After reading this, I’ve become pretty sure that I have a huge inferential gap in relation to this problem. I attempted to work it out in my head, and I may have gotten somewhere, but I’m not sure where.
1: “Assume we have a machine whose goal is accurate answers to any and all questions. We’ll call it an Oracle AI.”
2: “Oh. Wouldn’t that cause various physical safety problems? You know, like taking over the world and such?”
1: “No, we’re just going to assume it won’t do that.”
2: “Oh, okay.”
1: “How do we know it doesn’t have hidden goals and won’t give inaccurate answers?”
2: “But the defining assumption of an Oracle AI is that its goal is to provide accurate answers to questions.”
1: “Assume we don’t have that assumption.”
2: “So we DON’T have an Oracle AI?”
1: “No, we have an Oracle AI; it’s just not proven to be honest, or to actually have answering questions as its only goal.”
2: “But that was the definition… That we assumed? In what sense do we HAVE an Oracle AI when its definition includes both A and Not A? I’m utterly lost.”
1: “We’re essentially trying to establish an Oracle AI Prover, to prove whether the Oracle AI is accurate or not.”
2: “Wait, I have an idea: Gödel’s incompleteness theorems. The Oracle can answer ANY and ALL questions, but there must be at least one true statement it can’t prove. What if, in this case, that was its trustworthiness? A system which could prove itself trustworthy would have to be unable to prove something else, and the Oracle AI is stipulated to be able to answer any question, which would seem to mean it’s stipulated that it can prove everything else. Everything except its trustworthiness.”
1: “No, I mean, we’re assuming that the Oracle AI CAN prove its trustworthiness SOMEHOW.”
2: “But then, wouldn’t Gödel’s incompleteness theorems mean it would have to NOT be able to prove something else? But then it’s not an Oracle anymore, is it?”
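(For what it’s worth, the formal fact being gestured at here seems to be Gödel’s second incompleteness theorem, together with Löb’s theorem. A minimal sketch, under an assumption of my own rather than anything stipulated above: that the Oracle’s question-answering can be modeled as a consistent, effectively axiomatized theory $T$ containing enough arithmetic. Then:

$$T \nvdash \mathrm{Con}(T) \qquad \text{(second incompleteness theorem)}$$

$$T \vdash \bigl(\mathrm{Prov}_T(\ulcorner\varphi\urcorner) \rightarrow \varphi\bigr) \;\Longrightarrow\; T \vdash \varphi \qquad \text{(Löb’s theorem)}$$

On this reading, “the Oracle proves its own trustworthiness” would mean $T$ proves its own consistency, or proves its own soundness schema $\mathrm{Prov}_T(\ulcorner\varphi\urcorner) \rightarrow \varphi$ for every $\varphi$; either way, a $T$ that does this must be inconsistent, i.e. it proves everything, true or false.)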
I’ll keep thinking about this. But thank you for the thought-provoking question!