Security is solving the problem after the fact, and I think that is the wrong approach here. We should instead be asking whether something can be designed into the AI that prevents people from wanting to take it over, or that prevents takeovers from being disastrous (three suggestions for this are included in this comment).
Perhaps the best approach to security is to solve the problems that cause humans to commit crimes in the first place. Of course, this looks like a chicken-or-egg proposition: “The AI can’t solve the problems until it’s securely built, and it won’t be securely built until it’s solved the problems.” But we can look at the problem in several other ways, each of which assumes it is safest to deem the security problem impossible to solve after the fact:
Make a lesser AI that is just powerful enough to address the root reasons humans cause security issues. Being less powerful may also make it far less dangerous in the event that it is taken over.
Give everyone access to AI. If everyone already has access, there is no big jackpot to steal; taking one AI over wouldn’t buy you any power. This is a paradigm shift from what you are suggesting, and it would require going about much of this differently. At first glance it looks like it would exacerbate security problems: won’t people use their AIs for bad purposes? Of course. But if EVERYONE has access, that is no scarier than a gunman trying to take over a room where everyone else also has a gun. Imagine drawing a gun in a room full of armed people: you’ll end up getting shot, not taking over. Likewise, one AI can’t take over a world full of AIs. Distributing the power everywhere would prevent many security problems: it would deter criminals (and limit the damage they can do), check and balance big powers to prevent tyranny, avoid the corruption that can happen when good people are given too much power, and head off a situation like Zimbardo’s prison experiment, in which powerful AI builders feel a need to defend themselves from hackers, criminals, and tyrants and therefore begin using their power in an oppressive way.
Focus on enlightenment. The human race has far too much power and far too little wisdom. Increasing the power of toddlers won’t solve the toddlers’ issues; it will only increase the damage they do to one another because of those issues. Likewise, increasing the power of an unenlightened human race will only amplify its pain. If we all practice non-attachment, we will be strong enough to heed “those who sacrifice freedom for security deserve neither” and, hopefully, to end the addiction to violating freedom, whether by taking advantage of others or by sacrificing our own, in order to gain more security. Might it make more sense to focus on enlightenment than to throw power at an unenlightened species in an attempt to solve its problems? Maybe we should be asking, “How can technology help us become more enlightened?”
“Intelligent people solve problems, geniuses prevent them.”—Einstein
I think AI builders need to do as the geniuses do: prevent people from WANTING to take over the AI, or prevent a takeover from being a big deal. There is nothing you can do to guarantee it won’t be taken over, not with the level of certainty we’d need for a problem that could be this devastating to the entire world. The only way this much power can exist, and be safe from humans, is to address the human element itself.