Agree, and I would add: even if the oracle doesn’t accidentally spawn a demon that tries to escape on its own, someone could pretty easily turn it into an agent just by driving it with an external event loop.
I.e., ask it what a hypothetical agent (with, say, a text interface to the Internet) would do next, execute that action, feed the result back to the oracle, and repeat.
With public access, someone will eventually try this; the conversion barrier is just not that high. Merely asking an otherwise passive oracle to imagine what an agent would do all but instantiates one. If the imagined agent is sufficiently intelligent, it might not take many exchanges to do real harm, or even FOOM. And if the loop is automated (say, by a shell script) rather than driven manually by a human at each step, it could run a great many exchanges on a very short timescale, potentially making even a somewhat less intelligent agent powerful enough to be dangerous.
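To make the conversion concrete, here is a minimal sketch of the kind of driver loop I mean. `ask_oracle` and `run_action` are hypothetical stand-ins for the oracle’s question-answering interface and for whatever external interface (e.g., HTTP) the loop exposes; nothing here depends on any particular model or API.

```python
# Minimal sketch: an external event loop that turns a passive oracle
# into an agent. The loop, not the oracle, supplies the agency.

def ask_oracle(prompt: str) -> str:
    """Hypothetical stand-in: send `prompt` to the oracle, return its reply."""
    raise NotImplementedError("replace with the oracle's actual interface")

def run_action(action: str) -> str:
    """Hypothetical stand-in: execute the requested action (e.g., fetch a
    URL) and return the observed result as text."""
    raise NotImplementedError("replace with a real external interface")

history = ["You are imagining an agent with a text interface to the "
           "Internet. State the single next action the agent takes."]

for _ in range(1000):  # automated, so many exchanges on a short timescale
    action = ask_oracle("\n".join(history))  # ask what the agent would do
    result = run_action(action)              # actually do it
    history.append(f"Action: {action}")      # feed the outcome back in
    history.append(f"Result: {result}")
```

The point is how little structure this requires: one framing prompt, two I/O shims, and a loop.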
I highly doubt the current Bing AI is yet smart enough to instantiate an agent capable of being very dangerous (much less FOOM), but it is an oracle, with all that implies. It could be turned into an agent, and such an agent would almost certainly not be aligned; it would be relatively harmless only because it is relatively weak/stupid.
Update: See ChaosGPT and Auto-GPT.