If I’m Dr. Evil and I use it, won’t you be empowering me?
Musk: I think that’s an excellent question and it’s something that we debated quite a bit.
Altman: There are a few different thoughts about this. Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it's far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails or if Dr. Evil gets that one thing and there is nothing to counteract it, then we're really in a bad place.
That interview is indeed worrying. I’m surprised by some of the answers.
Like this?
The first one is a non-answer; the second one suggests that the proper response to Dr. Evil making a machine that transforms the planet into grey goo is Anonymous creating another machine which… transforms the grey goo into a nicer color of goo, I guess?
If you don’t believe that a foom is the most likely outcome (a common and not unreasonable position), then it’s probably better to have lots of weakly-superhuman AIs than a single weakly-superhuman AI.
Even in that case, whichever actor has the most processors would have the largest “AI farm”, with commensurate power projection.
I think the second one suggests that they don’t believe the future AI will be a singleton.