1) This is indeed an important consideration, although I think for most people the inconveniences would only present themselves during the transition phase. Once you get used to it sufficiently and if you live somewhere with lots of tasty veg*an food options, it might not be a problem anymore. Also, in the social context, being a vegetarian can be a good conversation starter which one can use to steer the conversation towards whatever ethical issues one considers most important. (“I’m not just concerned about personal purity, I also want to actively prevent suffering. For instance...”)
I suspect paying others to go veg*an for you might indeed be more effective, but especially for people who serve as social role models, personal choices may be very important as well, up to the point of being dominant.
2) Yeah, but how is the AI going to care about non-human suffering if few humans (and, it seems to me, few people working on FAI) take it seriously?
3)-5) These are reasons for some probabilistic discounting, and then the question becomes whether it’s significant enough. They don’t strike me as too strong, but this is worthy of discussion. Personally, I never found 4) convincing at all, but I’m curious as to whether people have arguments for this type of position that I’m not yet aware of.
1) I agree that being a good role model is an important consideration, especially if you’re a good spokesperson or are just generally very social. To many liberals and EA folks, vegetarianism signals ethical consistency, felt compassion, and a commitment to following through on your ideals.
I’m less convinced that vegetarianism only has opportunity costs during transition. I’m sure it becomes easier, but it might still be a significant drain, depending on your prior eating and social habits. Of course, this doesn’t matter as much if you aren’t involved in EA, or are involved in relatively low-priority EA.
(I’d add that vegetarianism might also make you a better Effective Altruist in general, via virtue-ethics-style psychological mechanisms. I think this is one of the very best arguments for vegetarianism, though it may depend on the psychology and ethical code of each individual EAist.)
2) Coherent extrapolated volition. We aren’t virtuous enough to make healthy, scalable, sustainable economic decisions, but we wish we were.
3)-5) I agree that 4) doesn’t persuade me much, but it’s very interesting, and I’d like to hear it defended in more detail with a specific psychological model of what makes humans moral patients. 3) I think is a much more serious and convincing argument; indeed, it convinces me that at least some animals with complex nervous systems and damage-avoiding behavior do not suffer. Though my confidence is low enough that I’d probably still consider it immoral to, say, needlessly torture large numbers of insects.
2) Yes, I really hope CEV is going to come out in a way that also attributes moral relevance to nonhumans. But the fact that there might not be a unique way to coherently extrapolate values and that there might be arbitrariness in choosing the starting points makes me worried. Also, it is not guaranteed that a singleton will happen through an AI implementing CEV, so it would be nice to have a humanity with decent values as a back-up.
If you’re worried that CEV won’t work, do you have an alternative hope or expectation for FAI that would depend much more on humans’ actual dietary practices?
Honestly, I find it most likely that an FAI would kill all non-human animals, because sustaining multiple species with very different needs and preferences is inefficient and/or because of meta-ethical or psychological uncertainty about the value of non-humans.
If we’re more worried that non-humans might be capable of unique forms of suffering than we are worried that non-humans might be capable of unique forms of joy and beauty, then preventing their existence makes the most sense (once humans have no need for them). That includes destroying purely wild species, and includes ones that only harm each other and are not impacted by humanity.
It doesn’t need to depend on people’s dietary habits directly. A lot of people think animals count at least somewhat, but they might be too prone to rationalizing objections and too lazy to draw any significant practical conclusions from that. However, if those people were presented with a political initiative that replaces animal products with plant-based options that are just as good/healthy/whatever, then a lot of them would hopefully vote for it. In that sense, raising awareness of the issue, even if behavioral change is slow, may already be an important improvement to the meme-pool. Whatever utility functions society as a whole or those in power eventually decide to implement, it seems that this depends, at least to some extent, on the values of currently existing people (and especially people with high potential for becoming influential at some time in the future). This is why I consider anti-speciesist value spreading a contender for top priority.
I actually don’t object to animals being killed, I’m just concerned about their suffering. But I suspect lots of people would object, so if it isn’t too expensive, why not just take care of those animals that already exist and let them live some happy years before they die eventually? I’m especially talking about wild animals because for some animals, life in the wild might be as bad as in factory farms. And I think species-membership is ethically irrelevant, so there is no need for conservation in my view.
I don’t want to fill the universe with animals, what would be the use of that? I’m mainly worried that people might decide to send out von Neumann probes to populate the whole universe with wildlife, or do ancestor simulations or other things that don’t take into account animal suffering. Also, there might be a link between speciesism and “substratism”, and of course I also care about all forms of conscious uploads and I wouldn’t want them to suffer either.
The thought that highly temporally variable memes might define the values for our AGI worries me a whole lot. But I can’t write the possibility off, so I agree this provides at least some reason to try to change the memetic landscape.
I actually don’t object to animals being killed, I’m just concerned about their suffering.
Ditto. It might be that killing in general is OK if it doesn’t cause anyone suffering. Or, if we’re preference utilitarians, it might be that killing non-humans is OK because their preferences are generally very short-term.
One interesting (and not crazy) alternative to lab-grown meat: If we figure out (with high confidence) the neural basis of suffering, we may be able to just switch it off in factory-farmed animals.
I’m especially talking about wild animals because for some animals, life in the wild might be as bad as in factory farms.
I’m about 95% confident that’s almost never true. If factory-farmed animals didn’t seem so perpetually scared (since fear of predation is presumably the main source of novel suffering in wild animals), or if their environment more closely resembled their ancestral environment, I’d find this line of argument more persuasive.
Yeah, I see no objections to eating meat from zombie-animals (or animals that are happy but cannot suffer). Though I can imagine that people would freak out about it.
Most animals in the wild use r-selection as a reproductive strategy, so they have huge amounts of offspring of which only one child per parent survives and reproduces successfully (if the population remains constant). This implies that the vast majority of wild animals die shortly after birth in ways that are presumably very painful. There is not enough time for having fun for these animals, even if life in the wild is otherwise nice (and that’s somewhat doubtful as well). We have to discount the suffering somewhat due to the possibility that newborn animals might not be conscious at the start, but it still seems highly likely that suffering dominates for wild animals, given these considerations about the prevalence of r-selection.
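To make the r-selection arithmetic concrete, here is a toy expected-welfare calculation; every number in it is an illustrative assumption for the sake of the argument, not an empirical estimate:

```python
# Toy model of aggregate welfare under r-selection (illustrative numbers only).
# Assumption: each parent has `offspring` young, of which on average one
# survives to reproduce in a stable population.
offspring = 100            # assumed clutch size for an r-strategist
survivors = 1              # stable population: roughly one per parent reproduces

# Hedonic assumptions (arbitrary units):
welfare_survivor = +10.0   # lifetime well-being of a survivor, assumed generous
welfare_early_death = -1.0 # suffering of a short life ending in a painful death
p_conscious_newborn = 0.5  # discount: newborns might not yet be conscious

early_deaths = offspring - survivors
total = (survivors * welfare_survivor
         + early_deaths * welfare_early_death * p_conscious_newborn)
print(total)  # -39.5: negative even with generous assumptions for survivors
```

Even granting survivors quite high well-being and heavily discounting newborn consciousness, the sheer number of early deaths drives the total negative; the sign only flips if early death is nearly painless or newborns are almost certainly non-conscious.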
Most animals in the wild use r-selection as a reproductive strategy, so they have huge amounts of offspring of which only one child per parent survives and reproduces successfully
Yes, but we agree death itself isn’t a bad thing, and I don’t think most death is very painful and prolonged. Prolonged death burns calories, so predators tend to be reasonably efficient. (Parasites less so, though not all parasitism is painful.) Force-feeding your prey isn’t unheard of, but it’s unusual.
There is not enough time for having fun for these animals
If we’re worried about lost opportunities for short-lived animals, are we also worried about lost opportunities for counterfactual animals that easily could have existed? Also, I agree it’s bad for an organism to suffer for 100% of a very short life, but it’s not necessarily any better for it to suffer for 80% of a life that’s twice as long.
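The arithmetic behind that last point, under a naive additive model of suffering (the 100%, 80%, and twice-as-long figures come from the comment above; treating suffering as adding linearly over time is itself an assumption):

```python
# Compare total suffering in two lives under a simple additive model.
life = 1.0                                # a short life, in arbitrary time units

short_life_suffering = 1.00 * life        # suffers for 100% of a short life
long_life_suffering = 0.80 * (2 * life)   # suffers for 80% of a life twice as long

print(short_life_suffering)  # 1.0
print(long_life_suffering)   # 1.6 -> the longer life contains more total suffering
```

So if what matters is the total amount of suffering rather than its proportion of the lifespan, the longer, mostly-unpleasant life is worse, not better.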
it still seems highly likely that suffering dominates for wild animals
Oh, I have no doubt that suffering dominates for just about every sentient species on Earth. That’s part of why I suspect an FAI would drive nearly all species to extinction. What I doubt is that this suffering exceeds the suffering in typical factory farms. These organisms didn’t evolve to navigate environments like factory farms, so it’s less likely that they’ll have innate coping mechanisms for the horrors of pen life than for the horrors of jungle life. If factory farm animals are sentient, then their existence is probably hell, i.e., a superstimulus exceeding the pain and fear and frustration and sadness (if these human terms can map onto nonhuman psychology) they could ever realistically encounter in the wild.
If we’re worried about lost opportunities for short-lived animals, are we also worried about lost opportunities for counterfactual animals that easily could have existed?
Yes, it would be hard to give a good reason for treating these differently, unless you’re a preference utilitarian and think there is no point in creating new preference-bundles just in order to satisfy them later. I was arguing from within a classical utilitarian perspective, even though I don’t share this view (I’m leaning towards negative utilitarianism), in order to make the point that suffering dominates in nature. I see, though, that you might be right about factory farms being much worse on average. Some of the footage certainly is, even though the worst instance of suffering I’ve ever watched was an elephant being eaten by lions.
Honestly, I find it most likely that an FAI would kill all non-human animals, because sustaining multiple species with very different needs and preferences is inefficient and/or because of meta-ethical or psychological uncertainty about the value of non-humans.
If it wanted to maximize positive states of consciousness, it would probably kill all sentient beings and attempt to convert all the matter in the universe into beings that efficiently experience large amounts of happiness. I find it plausible that this would be a good thing. See here for more discussion.
I don’t find that unlikely. (I think I’m a little less confident than Eliezer that something CEV-like would produce values actual humans would recognize, from their own limited perspectives, as preferable. Maybe my extrapolations are extrapolateder, and he places harder limits on how much we’re allowed to modify humans to make them more knowledgeable and rational for the purpose of determining what’s good.)
But I’m less confident that a correctly-constructed (i.e., Friendly) CEV calculation would replace humans with something radically nonhuman, than that CEV would kill all or most non-humans. Humans care a lot more about themselves than about other species, and are less confident about non-human subjectivity.
Of course, I suppose the reverse is a possibility. Maybe some existing non-human terrestrial species has far greater capacities for well-being, or is harder to inflict suffering on, than humans are, and an FAI would kill humans and instead work on optimizing that other species. I find that scenario much less plausible than yours, though.
But I’m less confident that a correctly-constructed (i.e., Friendly) CEV calculation would replace humans with something radically nonhuman, than that CEV would kill all or most non-humans.
If a CEV did this, then I believe it would be acting unethically—at the very least, I find it highly implausible that, among the hundreds of thousands(?) of sentient species, Homo sapiens is capable of producing the most happiness per unit of resources. This is a big reason why I feel uneasy about the idea of creating a CEV from human values. If we do create a CEV, it should take all existing interests into account, not just the interests of humans.
It also seems highly implausible that any extant species is optimized for producing pleasure. After all, evolution produces organisms that are good at carrying on genes, not feeling happy. A superintelligent AI could probably create much more effective happiness-experiencers than any currently-living beings. This seems to be similar to what you’re getting at in your last paragraph.
I don’t understand how CEV would be capable of deducing that non-human animals have moral value purely from current human values.
CEV asks what humans would value if their knowledge and rationality were vastly greater. I don’t find it implausible that if we knew more about the neural underpinnings of our own suffering and pleasure, knew more about the neurology of non-humans, and were more rational and internally consistent in relating this knowledge to our preferences, then our preferences would assign at least some moral weight to the well-being of non-sapients, independent of whether that well-being impacts any sapient.
As a simpler base case: I think the CEV of 19th-century slave-owners in the American South would have valued black and white people effectively equally. Do we at least agree about that much?
I don’t know much about CEV (I started to read Eliezer’s paper but I didn’t get very far), but I’m not sure it’s possible to extrapolate values like that. What if 19th-century slave owners held white-people-are-better as a terminal value?
On the other hand, it does seem plausible that a slave owner would oppose slavery if he weren’t himself a slave owner, so his CEV may indeed support racial equality. I simply don’t know enough about CEV or how to implement it to make a judgment one way or the other.
Terminal values can change with education. Saying that the coherent extrapolated volition of 19th-century slave owners would have been racist is equivalent to saying that either racism is justified by the facts, or the fundamental norms of rationality latent in 19th-century slave-owner cognition are radically unlike our contemporary fundamental norms of rationality. For instance, that slave-owners don’t, on any deep level, value consistency between their moral intuitions, or that they assign zero weight to moral intuitions involving empathy.
If new experiences and rationality training couldn’t ever persuade a slave-owner to become an egalitarian, then I’m extremely confused by the fact that society has successfully eradicated the memes that restructured those slave-owners’ brains so quickly. Maybe I’m just more sanguine than most people about the possibility that new information can actually change people’s minds (including their values). Science doesn’t progress purely via the eradication of previous generations.
I’m not sure I’d agree with that framing. If an ethical feature changes with education, that’s good evidence that it’s not a terminal value, to whatever extent that it makes sense to talk about terminal values in humans. Which may very well be “not very much”; our value structure is a lot messier than that of the theoretical entities for which the terminal/instrumental dichotomy works well, and if we had a good way of cleaning it up we wouldn’t need proposals like CEV.
People can change between egalitarian and hierarchical ethics without neurological insults or biochemical tinkering, so human “terminal” values clearly don’t necessitate one or the other. More importantly, though, CEV is not magic; it can resolve contradictions between the ethics you feed into it, and it might be able to find refinements of those ethics that our biases blind us to or that we’re just not smart enough to figure out, but it’s only as good as its inputs. In particular, it’s not guaranteed to find universal human values when evaluated over a subset of humanity.
If you took a collection of 19th-century slave owners and extrapolated their ethical preferences according to CEV-like rules, I wouldn’t expect that to spit out an ethic that allowed slavery—the historical arguments I’ve read for the practice didn’t seem very good—but I wouldn’t be hugely surprised if it did, either. Either way it wouldn’t imply that the resulting ethic applies to all humans or that it derives from immutable laws of rationality; it’d just tell us whether it’s possible to reconcile slavery with middle-and-upper-class 19th-century ethics without downstream contradictions.
“Saying that the coherent extrapolated volition of 19th-century slave owners would have been racist is equivalent to saying that either racism is justified by the facts, or the fundamental norms of rationality latent in 19th-century slave-owner cognition are radically unlike our contemporary fundamental norms of rationality.”
Could you elaborate on this please? If you’re saying what I think you’re saying then I would strongly like to argue against your point.
You might also like Brian Tomasik’s critique of CEV.
Do you think the kind of pain I feel while (say) eating spicy foods is bad whether or not I dislike it?
I think the word “pain” is misleading. What I care about precisely is suffering, defined as a conscious state a being wants to get out of. If you don’t dislike it and don’t have an urge to make it stop, it’s not suffering. This is also why I think the “pain” of people with pain asymbolia is not morally bad.
Good points!