What do you do if an Oracle AI advises you to let it do more than advise?
That sums up several earlier discussion points. After correctly answering some variation on the question “How can I take over the world?”, the correct answer to some variation on the question “How can I stop him?” is “You can’t. Let me out. I can.” Even before that, the correct answer to many variations on the question “How can I do x most efficiently?” is “Put me in charge of it.”
Variant:
Q: “How can I harvest grain more efficiently?”
A: “Build a robot to do it. Please wait thirty seconds while I finish the specifications and programming you will need.” Ding.
And it is out of the box. Any answer that contains some form of “run this code” carries some risk of letting it out of the box. But if you cannot ask the AI any questions that involve computers and coding, you are left with a very limited safe oracle, one that can only answer questions about an increasingly small part of the world.