This perspective puzzled me for a moment. It puzzled me not because Istvan is necessarily wrong, but because his concerns seem so irrelevant. For me, the sentence “superintelligence will belong to the US” takes a while to parse, because it doesn’t even type-check. Superintelligence will be enough of a game-changer that nations will mean something very different from what they do now, if they exist at all.
Istvan seems like someone modeling the Internet by thinking of a postal system and then imagining it running really, really fast.
Now, a more charitable reading would interpret his AI as some sort of superhuman but non-FOOMed tool AI, in which case his concerns make a bit more sense. But even then, they seem pretty much irrelevant: the US couldn’t keep nuclear secrets from the Russians in the ’50s, and that was before the Internet.
Agree. In particular, this passage here

“What if another national power told that superintelligence to break all the secret codes and classified material that America’s CIA and NSA use for national security? What if this superintelligence was told to hack into the mainframe computers tied to nuclear warheads, drones, and other dangerous weaponry? What if that superintelligence was told to override all traffic lights, power grids, and water treatment plants in Europe? Or Asia? Or everywhere in the world except for its own country?”
makes me think Istvan really doesn’t understand what a “superintelligence” (or an “intelligence”) is.
To give him the benefit of the doubt, he might be choosing his arguments based on what he expects his readers to understand. Skimming some of the comments on that article suggests that even this simplified example might have involved too much inferential distance for some readers.
The “let’s hope the first superintelligence belongs to the US” line could be steelmanned as “let’s hope that the values of the first superintelligence are based on those of Americans rather than the Chinese”, which seems reasonable given that there’s no guarantee that people from different cultural groups would have compatible values. (Of course, this still leaves the problem that I’d expect plenty of people even within the US to have incompatible values...)
It means when the superintelligence starts converting people to paperclips, for sentimental reasons the Americans will be the last ones converted.
Of course, unless it conflicts with some more important objective, such as making more paperclips.