Even if the AI can modify its code, it can’t really do anything that wasn’t entailed by its original programming.
(Ok, it could have a security vulnerability that allows the execution of externally injected malicious code, but that is a general issue for any computer system with an external digital connection.)
The hard part is predicting everything that was entailed by its initial programming and making sure it's all safe.
That's right, and the history of engineering tells us that "provably safe" and "provably secure" systems fail in unanticipated ways.