“Safer doesn’t imply safe.”
I think it’s important to distinguish between what I consider a True Oracle—an AI with no internal motivation system, including goal systems—and an AGI which has been designed to -behave- like an Oracle. A True Oracle AI is -not- a general intelligence.
The difference is that an AGI designed to behave like an Oracle tries to figure out what you want and gives it to you, while a True Oracle is necessarily quite stupid: it answers only the question you actually specify. From the linked article by Eliezer, this quote from Holden, “construct_utility_function(process_user_input()) is just a human-quality function for understanding what the speaker wants”, represents the difference. Encapsulating utility into your Oracle means your Oracle is behaving more like an agent than a tool; it’s making decisions about what you want without consulting you about it.
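To make that tool/agent line concrete, here’s a minimal sketch. Everything in it is hypothetical scaffolding of my own except the names construct_utility_function and process_user_input, which come from the pseudocode quoted above; the only point it makes is where the scoring rule comes from.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Problem:
        # A fully specified problem: candidate answers plus an explicit
        # scoring rule supplied by the user, not inferred by the machine.
        candidates: List[str]
        score: Callable[[str], float]

    def true_oracle(problem: Problem) -> str:
        # Tool: optimizes exactly the specification it was handed. A wrong
        # answer traces back to an incomplete or incorrect specification.
        return max(problem.candidates, key=problem.score)

    def process_user_input(raw_request: str) -> str:
        # Hypothetical stand-in: parse the user's natural-language request.
        return raw_request.strip().lower()

    def construct_utility_function(parsed_request: str) -> Callable[[str], float]:
        # Hypothetical stand-in for the "human-quality" step that guesses
        # what the speaker wants; this is where the agent-like behavior lives.
        return lambda answer: float(len(set(parsed_request) & set(answer.lower())))

    def oracle_like_agi(raw_request: str, candidates: List[str]) -> str:
        # Agent: decides what you want without consulting you about it.
        utility = construct_utility_function(process_user_input(raw_request))
        return max(candidates, key=utility)

The True Oracle is handed its score function as part of the problem; the oracle-like AGI invents one on your behalf.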
In fact, as I define such things, we already have Oracle AIs. The computer itself is one: you tell it what your problem is, and it solves it for you. If it gives you the wrong answer, it’s entirely because your problem specification is incomplete or incorrect. When I read people’s discussions of Oracle AIs, what they really seem to want is an AI that can take a poorly-defined problem, figure out what problem you’re -really- trying to solve, and solve -that- for you.
-That- is what is dangerous.
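To illustrate the earlier claim that a True Oracle’s wrong answers come entirely from the specification, here’s a throwaway example with made-up travel plans and a hypothetical solve helper; the unwanted answer disappears the moment the specification is completed.

    # A toy "oracle" that answers exactly the stated optimization problem.
    plans = [
        {"name": "fly",       "cost": 300, "arrives": True},
        {"name": "drive",     "cost": 120, "arrives": True},
        {"name": "stay home", "cost": 0,   "arrives": False},
    ]

    def solve(spec, options):
        # Return the option that best satisfies the spec, and nothing more.
        return min(options, key=spec)

    # Incomplete spec: "minimize cost." The literal best answer is to stay home.
    print(solve(lambda p: p["cost"], plans)["name"])                      # stay home

    # Corrected spec: minimize cost among plans that actually arrive.
    print(solve(lambda p: (not p["arrives"], p["cost"]), plans)["name"])  # drive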