the idealistic CEV version of ‘asking everyone’ seems a bit futile.
As I see it, adherence to the forms of democracy is important primarily for political reasons—it is firstly a process for gaining the consent of mankind to a compromise, and only secondly a technique for locating the ‘best’ compromise (best by some metric).
Also, as I see it, it is a temporary compromise. We don’t just do a single opinion survey and then extrapolate. We institute a constitution guaranteeing that mankind is to be repeatedly consulted as people become accustomed to the Brave New World of immortality, cognitive enhancement, and fun theory.
In that sense, it’s still futile. The whole reason for the discussion is that AI doesn’t really need the permission or consent of anyone; the expected result is that AI—whether friendly or unfriendly—will have the ability to enforce the goals of its design. Political reasons will be easily satisfied by a project that claims to try CEV/democracy but skips it in practice, since afterwards the political reasons will cease to have any power.
Also, a ‘constitution’ matters only if it is within the goal system of a Friendly AI, otherwise it’s not worth the paper it’s written on.
a ‘constitution’ matters only if it is within the goal system of a Friendly AI
Well, yes. I am assuming that the ‘constitution’ is part of the CEV, and we are both assuming that CEV or something like it is part of the goal system of the Friendly AI.
The whole reason for the discussion is that AI doesn’t really need the permission or consent of anyone.
I wouldn’t say that it is the whole reason for the discussion, though that is the assumption explaining why many people consider it urgent to get the definition of Friendliness right on the first try. Personally, I think that it is a bad assumption—I believe it should be possible to avoid the all-powerful singleton scenario, and create a ‘society’ of slightly less powerful AIs, each of which really does need the permission and consent of its fellows to continue to exist. But a defense of that position also belongs in a separate discussion.