I’m still up in the air regarding Eliezer’s arguments about CEV.
I have all kinds of ugh-factors coming to mind about the not-good, or at least not-‘PeterisP-good’, issues that an aggregate of 6 billion hairless-ape opinions would contain.
The ‘Extrapolated’ part is supposed to solve that, but then I’d say it shifts the whole problem from knowledge extraction to extrapolation. In my opinion, the difference between the volition of Random Joe and the volition of Random Mohammad (forgive me for stereotyping for the sake of a short example) is much smaller than the difference between the volition of Random Joe and the extrapolated volition of Random Joe ‘if he knew more, thought faster, was more the person he wishes he was’. Ergo, the idealistic CEV version of ‘asking everyone’ seems a bit futile. I could go into more detail, but that’s probably material for a separate discussion, analyzing the parts of CEV point by point.
the idealistic CEV version of ‘asking everyone’ seems a bit futile.
As I see it, adherence to the forms of democracy is important primarily for political reasons—it is firstly a process for gaining the consent of mankind to a compromise, and only secondly a technique for locating the ‘best’ compromise (best by some metric).
Also, as I see it, it is a temporary compromise. We don’t just do a single opinion survey and then extrapolate. We institute a constitution guaranteeing that mankind is to be repeatedly consulted as people become accustomed to the Brave New World of immortality, cognitive enhancement, and fun theory.
In that sense, it’s still futile. The whole reason for this discussion is that an AI doesn’t really need anyone’s permission or consent; the expected result is that an AI, whether friendly or unfriendly, will have the ability to enforce the goals of its design. The political reasons would be easily satisfied by a project that claims to pursue CEV/democracy but skips it in practice, since afterwards those political reasons would cease to have any power.
Also, a ‘constitution’ matters only if it is within the goal system of a Friendly AI, otherwise it’s not worth the paper it’s written on.
a ‘constitution’ matters only if it is within the goal system of a Friendly AI
Well, yes. I am assuming that the ‘constitution’ is part of the CEV, and we are both assuming that CEV or something like it is part of the goal system of the Friendly AI.
The whole reason for the discussion is that AI doesn’t really need permission or consent of anyone.
I wouldn’t say that it is the whole reason for the discussion, though that is the assumption explaining why many people consider it urgent to get the definition of Friendliness right on the first try. Personally, I think that it is a bad assumption—I believe it should be possible to avoid the all-powerful singleton scenario, and create a ‘society’ of slightly less powerful AIs, each of which really does need the permission and consent of its fellows to continue to exist. But a defense of that position also belongs in a separate discussion.