There are a few ideas that seem obvious to me, but which seem, perplexingly, to elude (or simply not to have occurred to) many folks I consider quite intelligent. One of these ideas is:
We can widen our circle of concern. But there’s no reason we must do so; and, indeed, there is—by definition!—no moral obligation to do it. (In fact, it may even be morally blameworthy to do so, in much the same way that it is morally blameworthy to take a pill that turns you into a murdering psychopath, if you currently believe that murder is immoral.)
It would be good[1], I think, for many in the rationalist community to substantially narrow their circles of moral concern (or, to be more precise, to shift from a simple “circle” of concern to a more complex model based on concentric circles / gradients).
[1] Here take “good” to mean something like “in accord with personal extrapolated volition”.
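To make the "simple circle vs. concentric circles / gradients" distinction concrete, here is a minimal sketch; the group names and weights below are invented for illustration and are not taken from the comment. A simple circle treats every entity as fully in or fully out, while a graded model lets concern fall off with distance rather than switching off at a boundary.

```python
# Illustrative sketch only: the group names and weights are invented for this example.

# A simple "circle" of concern: every entity is either fully inside or fully outside.
simple_circle = {"family", "friends", "neighbors"}

def simple_weight(group):
    return 1.0 if group in simple_circle else 0.0

# A concentric-circles / gradient model: concern falls off with relational distance
# instead of switching off at a single boundary.
graded_circles = {
    "family": 1.0,
    "friends": 0.8,
    "neighbors": 0.5,
    "strangers": 0.2,
}

def graded_weight(group):
    return graded_circles.get(group, 0.05)  # small residual concern for everyone else

print(simple_weight("strangers"), graded_weight("strangers"))  # 0.0 vs 0.2
```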
I agree with this, and also think it’s not obvious we should widen our moral circles. I do think that when many people reflect on their preferences, they will find that they do want to expand their moral circle, though it’s also not obvious that they would come to the opinion that they should not, if they first heard some arguments against it.
I would be curious about hearing more detail about your reasoning for why narrowing their moral circles has a good chance of being more in accordance with their CEV for many people.
I will try and take some time to formulate my thoughts and post a useful response, but for now, a terminological quibble which I think is relevant:
“CEV”, i.e. “coherent extrapolated volition”, refers (as I understand it) to the notion of aggregating the extrapolated volition across many (all?) individuals (humans, usually), and to the idea that this aggregated EV will “cohere rather than interfere”. (Aside: please don’t anyone quibble with this hasty definition; I’ve read Eliezer’s paper on CEV and much else about it besides, I know it’s complicated. I’m just pointing at the concept.)
I left out the ‘C’ part of ‘CEV’ deliberately. Whether our preferences and values cohere or not is obviously of great interest to the FAI builder, but it isn’t to me (in this context). I was, deliberately, referring only to the personal values and preferences of people in our community (and, perhaps, beyond). My intent was to refer not to what anyone prefers or values now; nor, on the other hand, to what they “should” prefer or value, on the basis of some interpersonal aggregation (such as, for instance, the oft-cited “judgment of history”, a.k.a. “trans-temporal peer pressure”); but rather, to what they, themselves, would prefer and value, if they learned more, considered more, etc. (In short, I am referring to the “best curve”—in Hanson’s “curve-fitting” model—that represents a person’s “ideal”, in some sense, morality.)
“Personal extrapolated volition” seems like as good a term as any. If there’s existing terminology I should be using instead, I’m open to suggestions.
Seems like a reasonable quibble. I tend to use CEV to also refer to personal extrapolation, since I have similar uncertainty about whether the values of a single person will cohere as I have about whether the values of multiple people will cohere, but it seems reasonable to still have different words for the different processes. PEV does seem as good as any.
FWIW it’s a pet peeve of mine when people use CEV to refer to personal extrapolated volition—it makes a complicated concept harder to refer to.
People have been using CEV to refer to both “Personal CEV” and “Global CEV” for a long time—e.g., in the 2013 MIRI paper “Ideal Advisor Theories and Personal CEV.”
I don’t know of any cases of Eliezer using “CEV” in a way that’s clearly inclusive of “Personal” CEV; he generally seems to be building into the notion of “coherence” the idea of coherence between different people. On the other hand, it seems a bit arbitrary to say that something should count as CEV if two human beings are involved, but shouldn’t count as CEV if one human being is involved, given that human individuals aren’t perfectly rational, integrated, unitary agents. (And if two humans is too few, it’s hard to say how many humans should be required before it’s “really” CEV.)
Eliezer’s original CEV paper did on one occasion use “coherence” to refer to intra-agent conflicts:
When people know enough, are smart enough, experienced enough, wise enough, that their volitions are not so incoherent with their decisions, their direct vote could determine their volition. If you look closely at the reason why direct voting is a bad idea, it’s that people’s decisions are incoherent with their volitions.
See also Eliezer’s CEV Arbital article:
(This isn’t an unrealistic example. Numerous experiments in behavioral economics demonstrate exactly this sort of circular preference. For instance, you can arrange 3 items such that each pair of them brings a different salient quality into focus for comparison.)
One may worry that we couldn’t ‘coherently extrapolate the volition’ of somebody with these pizza preferences, since these local choices obviously aren’t consistent with any coherent utility function. But how could we help somebody with a pizza preference like this?
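For concreteness, here is a minimal sketch of the kind of circular pairwise preference the quoted passage describes; the toppings and the brute-force check are invented for illustration, not taken from the article. It shows that no single utility assignment can reproduce all three head-to-head choices.

```python
from itertools import permutations

# Illustrative only: prefers[(a, b)] == True means "offered a head-to-head choice
# between a and b, the agent picks a". The three choices below form a cycle.
prefers = {
    ("mushroom", "pineapple"): True,  # mushroom beats pineapple
    ("pineapple", "onion"): True,     # pineapple beats onion
    ("onion", "mushroom"): True,      # onion beats mushroom, closing the cycle
}

def consistent_with_some_utility(pairwise):
    """Return True if some strict ranking (i.e. some utility assignment) reproduces
    every observed pairwise choice; with three items we can just try every ordering."""
    items = {x for pair in pairwise for x in pair}
    for ranking in permutations(items):
        utility = {item: -position for position, item in enumerate(ranking)}  # earlier = higher
        if all((utility[a] > utility[b]) == chose_a for (a, b), chose_a in pairwise.items()):
            return True
    return False

print(consistent_with_some_utility(prefers))  # False: the cycle rules out every utility function
```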
I think that absent more arguing about why this is a bad idea, I’ll probably go on using “CEV” to refer to several different things, mostly relying on context to make it clear which version of “CEV” I’m talking about, and using “Personal CEV” or “Global CEV” when it’s really essential to disambiguate.
On the other hand, it seems a bit arbitrary to say that something should count as CEV if two human beings are involved, but shouldn’t count as CEV if one human being is involved, given that human individuals aren’t perfectly rational, integrated, unitary agents. (And if two humans is too few, it’s hard to say how many humans should be required before it’s “really” CEV.)
Conversely, it seems odd to me to select / construct our terminology on the basis of questionable—and, more importantly, controversial—frameworks/views like the idea that it makes sense to view a human as some sort of multiplicity of agents.
The standard, “naive” view is that 1 person = 1 agent. I don’t see any reason not to say, nor anything odd about saying, that the concept of “CEV” applies when, and only when, we’re talking about two or more people. One person: personal extrapolated volition. Two people, three people, twelve million people, etc.: coherent extrapolated volition.
Anything other than this, I’d call “arbitrary”.
You could think of CEV applied to a single unitary agent as a special case where achieving coherence is trivial. It’s an edge case where the problem becomes easier, rather than an edge case where the concepts threaten to break.
This terminology does make it harder to talk about several agents who each separately have their own extrapolated volition (as you were trying to do in your original comment in this thread). Though replacing it with Personal Extrapolated Volition only helps a little, if we also want to talk about several separate groups who each have their own within-group extrapolated volition (which is coherent within each group but not between groups).
Yes, as you noted, I used “personal extrapolated volition” because the use case called for it. It seems to me that the existence of use cases that call for a term (in order to have clarity) is, in fact, the reason to have that term.
If it were up to me, I’d use “CEV” to refer to the proposal Eliezer calls “CEV” in his original article (which I think could be cashed out either in a way where applying the concept to subselves makes sense or in a way where that does not make sense), use “extrapolated volition” to refer to the more general class of algorithms that extrapolate people’s volitions, and use something like “true preferences” or “ideal preferences” or “preferences on reflection” when the algorithm for finding those preferences isn’t important, like in the OP.
If I’m not mistaken, “CEV” originally stood for “Collective Extrapolated Volition”, but then Eliezer changed the name when people interpreted it in more of a “tyranny of the majority” way than he intended.
“CEV”, i.e. “coherent extrapolated volition”, refers (as I understand it) to the notion of aggregating the extrapolated volition across many (all?) individuals (humans, usually), and to the idea that this aggregated EV will “cohere rather than interfere”. (Aside: please don’t anyone quibble with this hasty definition; I’ve read Eliezer’s paper on CEV and much else about it besides, I know it’s complicated. I’m just pointing at the concept.)
I’ll quibble with this definition anyway because I think many people get it wrong. The way I read CEV, it doesn’t claim that extrapolated preferences cohere, but specifically picks out the parts that cohere, and it does so in a way that’s interleaved with the extrapolation step instead of happening after the extrapolation step is over.
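As a toy illustration of that structural point only, the difference is between filtering for coherence once after extrapolation finishes, versus picking out the cohering parts at every extrapolation step. The two helper functions below are hypothetical placeholders, not anything specified in the CEV paper.

```python
# Toy structural sketch only; both helpers are hypothetical placeholders,
# not anything defined in the CEV paper.

def extrapolate_one_step(preferences):
    """Placeholder for one increment of extrapolation ('knew more, thought faster', etc.)."""
    raise NotImplementedError

def coherent_subset(preferences):
    """Placeholder for keeping only the parts of the extrapolated preferences that cohere."""
    raise NotImplementedError

def cohere_after_extrapolation(preferences, steps):
    # The reading the comment argues against: extrapolate all the way, then filter once at the end.
    for _ in range(steps):
        preferences = extrapolate_one_step(preferences)
    return coherent_subset(preferences)

def cohere_during_extrapolation(preferences, steps):
    # The reading the comment describes: pick out the cohering parts at every step,
    # so coherence-filtering is interleaved with extrapolation rather than appended to it.
    for _ in range(steps):
        preferences = coherent_subset(extrapolate_one_step(preferences))
    return preferences
```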
Yes. I know.
Yours is exactly the kind of comment that I specifically hoped would not get made, and which I therefore explicitly requested that people restrain themselves from making.
It didn’t look to me like my disagreement with your comment was caused by hasty summarization, given how specific your comment was on this point, so I figured this wasn’t among the aspects you were hoping people wouldn’t comment on. Apparently I was wrong about that. Note that my comment included an explanation of why I thought it was worth making despite your request and the implicit anti-nitpicking motivation behind it, which I agree with.