This seems like a really bad question. If we consider an Oracle AI and its human users as a system, then the factors that contribute to risk/safety include at least:
1. design of the OAI
2. its utility function
3. background knowledge it's given access to
4. containment methods
5. the questions
6. what we do with the answers after we get them
All of these interact in complex ways, so a question that is safe in one context could be unsafe in another. You say "interpret the question as narrowly or broadly as you want", but how else can we interpret it except as "design an Oracle AI system (elements 1-6) that is safe"?
Besides this, I agree with FAWS that (if we ought to be thinking about OAI at all) we should be thinking about how to use it to reduce existential risk or achieve a positive Singularity, which seems like a very different problem from "safe questions".