You’d certainly want the other guy’s Oracle not to answer certain questions, but what you want from your own Oracle is pretty much the same.
But the title of your post talks about how a safe Oracle AI is easier than a safe general AI. Whose questions would be safe to answer?
If an Oracle AI could be used to help spawn a friendly AI, it might be a possibility worth considering, but under no circumstances would I call it safe as long as it isn’t already friendly.
If we rely on humans to ask the right questions, how long will that work before someone asks a question that returns dangerous knowledge?
You’d basically be forced to ask dangerous questions anyway: once you can build an Oracle AI, you have to expect that others can build one too and will ask stupid questions.
If we had a truly safe oracle, we could ask it questions about the consequences of doing certain things, and of knowing certain things.
I can see society adapting stably to a safe oracle without needing it to be friendly.