Implementing computer networks that would be secure even against smart human attackers, let alone against superhuman intelligences, is an impossible goal. Human minds, whether operating in isolation or in large cooperative organizations, are simply unable to reason reliably at that level of complexity. It would be an even harder task than writing reliably bug-free large software projects or designing reliably bug-free state-of-the-art microprocessors—goals that humans already find unreachable in practice.
The only ways to avoid being hacked are: (1) to keep your computer offline, (2) to be an uninteresting target that's not worth the effort, and (3) to have good forensics and threaten draconian punishments against hackers. Of these, only (1) applies to the problem of keeping AIs boxed: a boxed AI has every incentive to break out, so it is never an uninteresting target, and threats of punishment carry no weight with it. But keeping the machine offline merely shifts the attack surface to its human operators, and then we get to the problem of social engineering.
Yes, well, so is creating a friendly AI.
Now, shut up and do the impossible.