2) Yes, I really hope CEV is going to come out in a way that also attributes moral relevance to nonhumans. But the fact that there might not be a unique way to coherently extrapolate values and that there might be arbitrariness in choosing the starting points makes me worried. Also, it is not guaranteed that a singleton will happen through an AI implementing CEV, so it would be nice to have a humanity with decent values as a back-up.
If you’re worried that CEV won’t work, do you have an alternative hope or expectation for FAI that would depend much more on humans’ actual dietary practices?
Honestly, I find it most likely that an FAI would kill all non-human animals, because sustaining multiple species with very different needs and preferences is inefficient and/or because of meta-ethical or psychological uncertainty about the value of non-humans.
If we’re more worried that non-humans might be capable of unique forms of suffering than that they might be capable of unique forms of joy and beauty, then preventing their existence makes the most sense (once humans have no need for them). That includes destroying purely wild species, even ones that only harm each other and are not impacted by humanity.
It doesn’t need to depend on people’s dietary habits directly. A lot of people think animals count at least somewhat, but they might be too prone to rationalizing objections and too lazy to draw any significant practical conclusions from that. However, if those people were presented with a political initiative that replaces animal products with plant-based options that are just as good/healthy/whatever, then a lot of them would hopefully vote for it. In that sense, raising awareness of the issue, even if behavioral change is slow, may already be an important improvement to the meme-pool. Whatever utility functions society as a whole or those in power eventually decide to implement, it seems that this depends at least to some extent on the values of currently existing people (and especially people with high potential for becoming influential at some time in the future). This is why I consider anti-speciesist value spreading a contender for top priority.
I actually don’t object to animals being killed, I’m just concerned about their suffering. But I suspect lots of people would object, so if it isn’t too expensive, why not just take care of those animals that already exist and let them live some happy years before they eventually die? I’m especially talking about wild animals because for some animals, life in the wild might be as bad as in factory farms. And I think species-membership is ethically irrelevant, so there is no need for conservation in my view.
I don’t want to fill the universe with animals, what would be the use of that? I’m mainly worried that people might decide to send out von Neumann probes to populate the whole universe with wildlife, or do ancestor simulations or other things that don’t take into account animal suffering. Also, there might be a link between speciesism and “substratism”, and of course I also care about all forms of conscious uploads and I wouldn’t want them to suffer either.
The thought that highly temporally variable memes might define the values for our AGI worries me a whole lot. But I can’t write the possibility off, so I agree this provides at least some reason to try to change the memetic landscape.
I actually don’t object to animals being killed, I’m just concerned about their suffering.
Ditto. It might be that killing in general is OK if it doesn’t cause anyone suffering. Or, if we’re preference utilitarians, it might be that killing non-humans is OK because their preferences are generally very short-term.
One interesting (and not crazy) alternative to lab-grown meat: If we figure out (with high confidence) the neural basis of suffering, we may be able to just switch it off in factory-farmed animals.
I’m especially talking about wild animals because for some animals, life in the wild might be as bad as in factory farms.
I’m about 95% confident that’s almost never true. If factory-farmed animals didn’t seem so perpetually scared (since fear of predation is presumably the main source of novel suffering in wild animals), or if their environment more closely resembled their ancestral environment, I’d find this line of argument more persuasive.
Yeah, I see no objections to eating meat from zombie-animals (or animals that are happy but cannot suffer). Though I can imagine that people would freak out about it.
Most animals in the wild use r-selection as a reproductive strategy, so they have huge numbers of offspring, of which only one child per parent survives and reproduces successfully (if the population remains constant). This implies that the vast majority of wild animals die shortly after birth in ways that are presumably very painful. These animals have hardly any time for having fun, even if life in the wild is otherwise nice (and that’s somewhat doubtful as well). We have to discount the suffering somewhat due to the possibility that newborn animals might not be conscious at the start, but given these considerations about the prevalence of r-selection, it still seems highly likely that suffering dominates for wild animals.
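To make the demographic arithmetic concrete, here is a minimal sketch in Python. The clutch size and welfare numbers are illustrative assumptions, not empirical estimates; the only structural input is that a roughly constant population implies about one surviving offspring per parent.

```python
# Toy model of aggregate welfare under r-selection.
# All numeric values below are assumptions chosen for illustration; the one
# structural fact is that a constant population implies roughly one
# surviving offspring per parent.

clutch_size = 100            # offspring per parent (assumed)
survivors_per_parent = 1     # implied by a constant population

fraction_dying_young = (clutch_size - survivors_per_parent) / clutch_size

# Crude hedonic bookkeeping in arbitrary welfare units (assumed):
welfare_of_early_death = -1.0   # a brief life ending painfully
welfare_of_full_life = 20.0     # granting the survivor a pleasant wild life

expected_welfare_per_birth = (
    fraction_dying_young * welfare_of_early_death
    + (1 - fraction_dying_young) * welfare_of_full_life
)

print(f"{fraction_dying_young:.0%} of offspring die young")              # 99%
print(f"Expected welfare per birth: {expected_welfare_per_birth:+.2f}")  # -0.79
```

On these assumptions, each surviving life would need about +99 welfare units just to offset the painful deaths of its 99 siblings; that is the sense in which the prevalence of r-selection tilts the aggregate toward suffering.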
Most animals in the wild use r-selection as a reproductive strategy, so they have huge numbers of offspring, of which only one child per parent survives and reproduces successfully
Yes, but we agree death itself isn’t a bad thing, and I don’t think most deaths are very painful and prolonged. A prolonged death burns calories, so predators tend to be reasonably efficient. (Parasites less so, though not all parasitism is painful.) Force-feeding your prey isn’t unheard of, but it’s unusual.
These animals have hardly any time for having fun
If we’re worried about lost opportunities for short-lived animals, are we also worried about lost opportunities for counterfactual animals that easily could have existed? Also, I agree it’s bad for an organism to suffer for 100% of a very short life, but it’s not necessarily any better for it to suffer for 80% of a life that’s twice as long.
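Spelling out the arithmetic behind that last comparison (a sketch with made-up units, where L is an arbitrary unit of lifespan):

```python
# The suffering fractions and lifespans are the hypothetical ones from the
# sentence above; L is an arbitrary unit of lifespan.

L = 1.0

short_life_suffering = 1.00 * L        # 100% of a very short life
long_life_suffering = 0.80 * (2 * L)   # 80% of a life twice as long

print(short_life_suffering)  # 1.0
print(long_life_suffering)   # 1.6 -- more total suffering, despite the lower fraction
```

A lower proportion of suffering can still mean more suffering in total once the lifespan scales up; whether the 0.4·L of good experience in the longer life offsets the extra 0.6·L of suffering depends entirely on one’s exchange rate between happiness and suffering.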
it still seems highly likely that suffering dominates for wild animals
Oh, I have no doubt that suffering dominates for just about every sentient species on Earth. That’s part of why I suspect an FAI would drive nearly all species to extinction. What I doubt is that this suffering exceeds the suffering in typical factory farms. These organisms didn’t evolve to navigate environments like factory farms, so they’re less likely to have innate coping mechanisms for the horrors of pen life than for the horrors of jungle life. If factory-farmed animals are sentient, then their existence is probably hell, i.e., a superstimulus exceeding the pain and fear and frustration and sadness (if these human terms can map onto nonhuman psychology) they could ever realistically encounter in the wild.
If we’re worried about lost opportunities for short-lived animals, are we also worried about lost opportunities for counterfactual animals that easily could have existed?
Yes, it would be hard to give a good reason for treating these differently, unless you’re a preference utilitarian and think there is no point in creating new preference-bundles just in order to satisfy them later. I was arguing from within a classical utilitarian perspective, even though I don’t share this view (I’m leaning towards negative utilitarianism), in order to make the point that suffering dominates in nature. You might be right, though, that factory farms are much worse on average. Some of the footage certainly is, even though the worst instance of suffering I’ve ever watched was an elephant being eaten by lions.
Honestly, I find it most likely that an FAI would kill all non-human animals, because sustaining multiple species with very different needs and preferences is inefficient and/or because of meta-ethical or psychological uncertainty about the value of non-humans.
If it wanted to maximize positive states of consciousness, it would probably kill all sentient beings and attempt to convert all the matter in the universe into beings that efficiently experience large amounts of happiness. I find it plausible that this would be a good thing. See here for more discussion.
I don’t find that unlikely. (I think I’m a little less confident than Eliezer that something CEV-like would produce values actual humans would recognize, from their own limited perspectives, as preferable. Maybe my extrapolations are extrapolateder, and he places harder limits on how much we’re allowed to modify humans to make them more knowledgeable and rational for the purpose of determining what’s good.)
But I’m less confident that a correctly-constructed (i.e., Friendly) CEV calculation would replace humans with something radically nonhuman, than that CEV would kill all or most non-humans. Humans care a lot more about themselves than about other species, and are less confident about non-human subjectivity.
Of course, I suppose the reverse is a possibility. Maybe some existing non-human terrestrial species has far greater capacities for well-being, or is harder to inflict suffering on, than humans are, and an FAI would kill humans and instead work on optimizing that other species. I find that scenario much less plausible than yours, though.
But I’m less confident that a correctly-constructed (i.e., Friendly) CEV calculation would replace humans with something radically nonhuman, than that CEV would kill all or most non-humans.
If a CEV did this then I believe it would be acting unethically—at the very least, I find it highly implausible that, among the hundreds of thousands(?) of sentient species, Homo sapiens is capable of producing the most happiness per unit of resources. This is a big reason why I feel uneasy about the idea of creating a CEV from human values. If we do create a CEV, it should take all existing interests into account, not just the interests of humans.
It also seems highly implausible that any extant species is optimized for producing pleasure. After all, evolution produces organisms that are good at carrying on genes, not feeling happy. A superintelligent AI could probably create much more effective happiness-experiencers than any currently-living beings. This seems to be similar to what you’re getting at in your last paragraph.