Well, if people become sufficiently convinced that deploying a technology would be a really bad idea and not in anyone’s best interest, they can refrain from deploying it. No one has used nuclear weapons in war since WWII, after all.
Of course, it would take some pretty strong evidence for that to happen. But, hypothetically speaking, if we created a non-self-improving oracle AI, asked it “how can we do an intelligence explosion without killing ourselves?”, and it told us “Sorry, you can’t, there’s no way”, then we’d have to try to convince everyone not to “push the button”.
If we had a superintelligent Oracle, we could just ask it what the maximally persuasive argument for not making AIs was and hook it up to some kind of broadcast.
If, on the other hand, this is some sort of single-function Oracle, then I don’t think we’re capable of preventing our extinction. Maybe if we managed to become a singleton somehow; if you know how to do that, I have some friends who would be interested in your ideas.
Well, the oracle was just an example.
What if, again hypothetically speaking, Eliezer and his group, while working on friendly AI theory, proved mathematically beyond a shadow of a doubt that any intelligence explosion would end badly, and that friendly AI was impossible? While he doesn’t like it, being a rationalist, he accepts it once there is no rational alternative. He publishes these results; experts all over the world look at them, check them, and sadly agree that he was right.
Do you think any major organization with enough resources and manpower to create an AI would still do so if they knew that it would result in their own horrible deaths? I think the example of nuclear weapons shows that it’s at least possible that people may refrain from an action if they understand that it’s a no-win scenario for them.
This is all just hypothetical, mind you; I’m not really convinced that “AI goes foom” is all that likely a scenario in the first place, and if it were, I don’t see any reason that friendly AI of one type or another wouldn’t be possible. But if it actually weren’t possible, that may very well be enough to stop people, so long as that fact could be demonstrated to everyone’s satisfaction.