An oracle AI just moves the problem to structuring your queries so that it answers the question you meant to ask, as opposed to the question you literally asked.
This solves that problem. The AI tries to produce an answer it thinks you will approve of, and which mimics the output of another human.
The “human” criterion is as ill-defined as any other control mechanism.
We don’t need to define “humans” because we have tons of examples. And we reduce the problem to prediction, which is something AIs can be told to do.
Oh. Well, if we have enough examples that we don’t need to define it, just create a few human-like AIs—don’t worry about all that superintelligence nonsense, we can just create human-like AIs and run them faster. And if we have enough insight into humans to tell an AI how to predict them, it should be trivial to skip the “tell an AI” part and predict directly what a human would come up with.
AI solved.
Or maybe you’re hiding complexity behind definitions.