Politics as a process doesn’t generate values; they’re strictly an input.
Politics is partly about choosing goals/values. (E.g., do we value equality or total wealth?) It is also about choosing the means of achieving those goals, and about signaling power. Most of these are not relevant to designing a future Friendly AI.
Yes, a polity is an “optimizer” in some crude sense, optimizing towards a weighted sum of the values of its members with some degree of success. Corporations and economies have also been described as optimizers. But I don’t see too much similarity to AI design here.
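As a gloss on the “weighted sum” phrasing (this formalization is my own, not something the comment commits to): if member $i$ has a utility function $u_i$ over policies and political weight $w_i$, the crude claim is that the polity behaves roughly as if it were maximizing

$$U_{\text{polity}}(x) \;\approx\; \sum_i w_i \, u_i(x),$$

where $x$ ranges over candidate policies and the weights $w_i$ track each member’s actual influence, with the “some degree of success” caveat covering how noisily the process does this.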
Deciding what we value isn’t relevant to friendliness? Could you explain that to me?
The whole point of CEV is that we give the AI an algorithm for educing our values, and let it run. At no point do we try to work them out ourselves.
I mentally responded to you and forgot to, you know, actually respond.
I’m a bit confused by this, and since it was upvoted I’m less sure I get CEV....
It might clear things up to point out that I’m making a distinction between goals or preferences vs. values. CEV could be summarized as “fulfill our ideal rather than actual preferences”, yeah? As in, we could be empirically wrong about what would maximize the things we care about, since we can’t really be wrong about what to care about. So I imagine the AI needing to be programmed with our values (the meta-wants that motivate our current preferences), and it would extrapolate from them to come up with better preferences, or at least it seems that way to me. Or does the AI figure that out too, somehow? If so, what does an algorithm that figures out both our preferences and our values contain?
Ha, yes, I often do that.
The motivation behind CEV also includes the idea that we might be wrong about what we care about. So instead of programming our values in directly, you give your FAI an algorithm for (a rough sketch in code follows this list):
1. Locating people
2. Working out what they care about
3. Working out what they would care about if they knew more, etc.
4. Combining these preferences
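To make the shape of that algorithm concrete, here is a minimal sketch in Python, assuming a toy representation of preferences as outcome-to-weight dictionaries. Every name in it (`Person`, `Preferences`, the four stub functions) is hypothetical and made up for illustration; the open problem in CEV is precisely how to fill in these stubs, so read this as a diagram of the structure rather than an implementation.

```python
from dataclasses import dataclass

# Illustrative sketch only: every type and function here is a made-up stub.


@dataclass
class Person:
    name: str


# Toy representation: a preference ordering as a dict mapping outcomes to weights.
Preferences = dict[str, float]


def locate_people(world: list[str]) -> list[Person]:
    """Step 1: find the morally relevant agents (stubbed)."""
    return [Person(name) for name in world]


def elicit_current_preferences(person: Person) -> Preferences:
    """Step 2: work out what the person currently cares about (stubbed)."""
    return {"status_quo": 1.0}


def extrapolate(prefs: Preferences, person: Person) -> Preferences:
    """Step 3: what they *would* care about if they knew more, thought faster, etc.
    Stubbed as the identity function; `person` is unused in this toy version."""
    return dict(prefs)


def combine(all_prefs: list[Preferences]) -> Preferences:
    """Step 4: aggregate the extrapolated preferences, here by simple summation."""
    combined: Preferences = {}
    for prefs in all_prefs:
        for outcome, weight in prefs.items():
            combined[outcome] = combined.get(outcome, 0.0) + weight
    return combined


def coherent_extrapolated_volition(world: list[str]) -> Preferences:
    people = locate_people(world)
    extrapolated = [extrapolate(elicit_current_preferences(p), p) for p in people]
    return combine(extrapolated)


if __name__ == "__main__":
    print(coherent_extrapolated_volition(["Alice", "Bob"]))
```

The only point of the sketch is the shape: no values appear as constants in the source; they only come out of steps 1–4 at run time, which is the “at no point do we try to work them out ourselves” part.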
I’m not sure what distinction you’re trying to draw between values and preferences (perhaps a moral vs non-moral one?), but I don’t think it’s relevant to CEV as currently envisioned.
Actually, when I said “most” in “most of these are not relevant to designing a future Friendly AI,” I was thinking that values are the exception: they are relevant.
Oh. Then yeah, OK, I think I agree.