It seems like your comment is saying something like:
1. These restrictions are more relevant to an Oracle than to other kinds of AI.
2. Even an Oracle can act by answering questions in whatever way will get people to further its intentions.
If the AI is deliberately making things happen in the world, then I would say it’s not an Oracle, it’s an Agent whose I/O channel happens to involve answering questions. (Maybe the programmers intended to make an Oracle, but evidently they failed!)
My response to @Jeffrey Heninger would have been instead:
“If you have an aligned Oracle, then you wouldn’t ask it to predict unpredictable things. Instead you would ask it to print out plans to solve problems—and then it would come up with plans that do not rely on predicting unpredictable things.”