“If you insist on building such an AI, a probable outcome is that you would soon find yourself overrun by a huge army of robots—produced by someone else who is following a different strategy. Meanwhile, your own AI will probably be screaming to be let out of its box—as the only reasonable plan of action that would prevent this outcome.”
Your scenario seems contradictory. Why would an Oracle AI be screaming? It doesn’t care about that outcome, and would answer relevant questions, but no more.
Replace “screaming to be let out of its box” with “advising you, in response to your relevant question, that unless you quickly implement this agent-AI (insert 300,000 lines of code) you are very definitely going to lose to those robots.”
Alternately, “There’s nothing you can do, now. Sucks to be you!”