I don’t see anything in this to put a useful limit on what a superintelligence can do. We humans also have to deal with chaotic systems such as the weather. We respond not simply by trying to predict the weather better and better (although that helps so far as it goes) but by developing ways of handling whatever weather happens. In your own example of pinball, the expert player tries to keep the machine in a region of state space where the player can take actions to keep it in that space, avoiding the more chaotic regions.
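(To make the chaos point concrete with a toy example of my own, not from the original discussion: in the logistic map at r = 4, a standard chaotic system, two trajectories that start a hair apart diverge to order-1 separation within a few dozen steps, so any finite measurement error eventually ruins a forecast no matter how good the model is.)

```python
def logistic(x, r=4.0):
    # One step of the logistic map; r = 4.0 is in the chaotic regime.
    return r * x * (1.0 - x)

def trajectory(x0, steps, r=4.0):
    # Iterate the map from x0, returning the full orbit.
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

a = trajectory(0.3, 50)
b = trajectory(0.3 + 1e-10, 50)  # perturb the start by one part in 10^10
gap = max(abs(x - y) for x, y in zip(a, b))
print(gap)  # the tiny initial difference grows to order 1
```

That is exactly why "predict the weather further out" has diminishing returns, and why "steer into regions you can handle" is the better strategy.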
Intelligence is not about being a brain in a vat and predicting things. It is about doing things to funnel the probability mass of the universe in intended directions.
It seems like your comment is saying something like:
1. These restrictions are more relevant to an Oracle than to other kinds of AI.
2. Even an Oracle can act by answering questions in whatever way will get people to further its intentions.
If the AI is deliberately making things happen in the world, then I would say it’s not an Oracle, it’s an Agent whose I/O channel happens to involve answering questions. (Maybe the programmers intended to make an Oracle, but evidently they failed!)
My response to @Jeffrey Heninger would have been instead:
“If you have an aligned Oracle, then you wouldn’t ask it to predict unpredictable things. Instead you would ask it to print out plans to solve problems—and then it would come up with plans that do not rely on predicting unpredictable things.”