No go. Four reasons.
One:
If the builders have raised their intelligence that high, then other people of that time will be able to do the same, and could therefore potentially crack the AI.
Two:
Your argument rests on the assumption that enough intelligence makes for perfect security. It may be that no matter how intelligent the designers are, their security plans will not be perfect. Perfect security looks about as likely to me as perpetual motion. No matter how much intelligence you throw at it, you won’t get a perpetual motion machine; we’d need some paradigm-shattering new physics for that. I suppose it’s possible someone will shatter the physics paradigm by discovering new information, but that’s not something to count on when building a perpetual motion machine, especially when you’re counting on that machine to keep the world safe.
Three:
Whenever humans have tried to concentrate too much power in one place, it has not worked out for them. Take communism in Russia: the idea was to share the wealth by letting one group distribute it. That did not work.
The founding fathers of the USA insisted on checks and balances on the government’s power. Surely you are aware of the reasons for that.
If the builders are the only ones in the world with intelligence that high, that power may corrupt them, and they may make a pact to usurp the AI themselves.
Four:
In that position, you may encounter unexpected thoughts that seem to justify taking advantage of the situation. Before becoming a jailor, for instance, you would assume you’re going to be ethical and fair. In the situation itself, though, people change. (See also: Zimbardo’s Stanford prison experiment.)
Why do they change? I imagine the reasoning goes a little like this: “Great, I’m in control. Oh, wait. Everyone wants to get out. Okay. And they’re a threat to me because I’m keeping them in here. I’m going to get into a lot of power struggles in this job. Even if I lose only 1% of the time, the consequences of losing a power struggle are dire, so I should probably err on the side of caution: use too much force rather than too little. And if it’s okay to use physical force, then how bad is a little psychological oppression as a deterrent? That will be a bit of extra security for me and help me maintain order in this jail. Considering the serious risk and the high chance of injury, it’s necessary to use everything I’ve got.”
We don’t know what kind of reasoning the AI builders will fall into at that time. They might be thinking like this:
“We’re going to make the most powerful thing in the world, yay! But wait, everyone else wants it. They’re trying to hack us, spy on us… there are people out there who would kidnap and torture us to get hold of this information. They might do all kinds of horrible things to us. Oh my goodness, and they’re not going to stop trying to hack us when we’re done; our information will still be valuable. I could be kidnapped years from now and tortured for it then. I had better give myself some kind of back door into the AI, something that will make it protect me when I need it. (A month later:) Well… surely it’s justified to use the back door for this one thing… and maybe for that one thing, too… man, I’ve got threats all over me. If I don’t handle this perfectly, I’ll probably fail. Even if I only make a mistake one time in a hundred, that could be devastating. (Begins using the back door all the time.) And I’m important. I’m working on the most powerful AI; I’m needed to make a difference in the world. I had better protect myself and err on the side of caution. I could take these preventative measures over here… people won’t like the limits I place on them, but the pros outweigh the cons, so: oppress.”
The limits may be taken as evidence that the AI builders cannot be trusted. However justified the limits are, some group of people will feel oppressed by them, whether irrational people or people who see a need for a freedom the AI builders don’t. If that group is angry about the limits, it will oppose the AI builders; if it begins to resist them, the builders will be forced to increase security, which may oppress the group further. This could become a feedback loop that gets out of hand: increasing resistance to the AI builders justifies increasing oppression, and increasing oppression justifies increasing resistance.
This is how an AI builder could turn into a jailor.
If part of the goal is to create an AI that will enforce laws, the AI researchers will literally be part of the penal system. We could be setting ourselves up for the world’s most spectacular prison experiment.
Checks and balances, Wei_Dai.