How simple is the simplest animal you’re willing to assign moral worth to?
I don’t value animals per se; it is their suffering I care about and want to prevent. If it turns out that even the tiniest animals can suffer, I will take this into consideration. I’m already taking insects or nematodes into consideration probabilistically; I think it is highly unlikely that they are sentient, and I think that even if they are sentient, their suffering might not be as intense as that of mammals, but since their numbers are so huge, the well-being of all those small creatures makes up a non-negligible term in my utility function.
If you don’t care about organisms simple enough that they don’t suffer, does it seem “arbitrary” to you to single out a particular mental behavior as being the mental behavior that signifies moral worth?
No, it seems completely non-arbitrary to me. Only sentient beings have a first-person point of view; only for them can states of the world be good or bad. A stone cannot be harmed in the same way a sentient being can be harmed. Introspectively, my suffering is bad because it is suffering; there is no other reason.
If you calculated that assigning even very small moral worth to a simple but sufficiently numerous organism leads to the conclusion that the moral worth of non-human organisms on Earth strongly outweighs, in aggregate, the moral worth of humans, would you act on it (e.g. by making the world a substantially better place for some bacterium by infecting many other animals, such as humans, with it)?
I don’t care about maximizing the number of morally relevant entities, so this is an unlikely scenario. But I guess the point of your question is whether I am serious about the criteria I’m endorsing. Yes, I am. If my best estimates come out in a way that leads to counterintuitive conclusions, and if that remains the case even after I adjust for overconfidence on my part before doing anything irreversible, then I would indeed act accordingly.
If you were the only human left on Earth and you couldn’t find enough non-meat to survive on, would you kill yourself to avoid having to hunt to survive?
The lives of most wild animals involve a lot of suffering already, and at some point, they are likely going to die painfully anyway. It is unclear whether me killing them (assuming I’d even be skilled enough to get one of them) would be net bad. I don’t intrinsically object to beings dying/being killed. But again, if it turns out that some action (e.g. killing myself) is what best fulfills the values I’ve come up with under reflection, I will do that, or, if I’m not mentally capable of doing it, I’d take a pill that would make me capable.
How do you resolve conflicts among organisms (e.g. predatorial or parasitic relationships)?
I don’t know, but I assume that an AI would be able to find a great solution. Maybe by reengineering animals so that they become incapable of experiencing suffering, while somehow keeping the function of pain intact. Or maybe by simply getting rid of Darwinian nature and replacing it, if that is deemed necessary, with something artificial and nice.
A priori, it seems that the moral weight of insects would either be dominated by their massive numbers or by their tiny capacities. It’s a narrow space where the two balance and you get a non-negligible but still-not-overwhelming weight for insects in a utility function. How did you decide that this was right?
I think there are good arguments for suffering not being weighted by number of neurons, and if you assign even a 10% probability to that being the case, you end up with insects (and maybe nematodes and zooplankton) dominating the utility function because of their overwhelming numbers.
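To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch; all population and neuron figures below are rough illustrative placeholders, not estimates taken from this thread.

```python
# Back-of-the-envelope sketch of the argument above.
# All population and neuron figures are rough illustrative placeholders.

populations = {"humans": 7e9, "wild mammals/birds": 1e11, "insects": 1e19}
neurons_per_individual = {"humans": 8.6e10, "wild mammals/birds": 1e9, "insects": 1e5}

# Credence that moral weight does NOT scale with neuron count.
p_equal_weight = 0.10

def expected_weight(group):
    """Mix two hypotheses: equal weight per individual vs. neuron-count weighting."""
    n = populations[group]
    equal_total = n * 1.0  # every individual counts the same as a human
    neuron_total = n * neurons_per_individual[group] / neurons_per_individual["humans"]
    return p_equal_weight * equal_total + (1 - p_equal_weight) * neuron_total

for group in populations:
    print(f"{group:>20}: {expected_weight(group):.1e}")

# humans ~7e9, wild mammals/birds ~1e10, insects ~1e18: even at 10% credence
# in the equal-weight hypothesis, the insect term dominates by a huge margin.
```

Under these made-up numbers the insect term comes out roughly eight orders of magnitude larger than the human term, which is the “dominating the utility function” effect described above.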
Having said that, ways of increasing the well-being of these animals may be quite different from ways of increasing it for larger animals. In particular, because so many of them die within the first few days of life, their average quality of life seems like it would be terrible. So reducing their populations looks like the current best option.
There may be good instrumental reasons for focusing on less controversial animals and hoping that this promotes the kind of antispeciesism that spills over into concern about insects and helps improve similar situations in the future.
For what it’s worth, here are the results of a survey that Vallinder and I circulated recently. 85% of expert respondents, and 89% of LessWrong respondents, believe there is at least a 1% chance that insects are sentient, and 77% of experts and 69% of LessWrongers believe there is at least a 20% chance that they are sentient.
Very interesting. What were they experts in? And how many people responded?
They were experts in pain perception and related fields. We sent the survey to about 25 people, of whom 13 responded.
Added (6 November, 2015): If there is interest, I can reconstruct the list of experts we contacted. Just let me know.
Yes, my current estimate for that is less than 1%, but this is definitely something I should look into more closely. This has been on my to-do list for quite a while already.
Another thing to consider is that insects are a diverse bunch. I’m virtually certain that some of them aren’t conscious; see, for instance, this type of behavior. OTOH, cockroaches or bees seem much more likely to be sentient.
Yes. Bees and cockroaches both have about a million neurons, compared with maybe 100,000 for most insects.
Can you summarize the properties you look for when making these kinds of estimates of whether an insect is conscious/sentient/etc.? Or do you make these judgments based on more implicit/instinctive inspection?
I mostly do it by thinking about what I would accept as evidence of pain in more complex animals and seeing whether it is present in insects. Complex pain behavior and evolutionary and functional homology relating to pain are things to look for.
There is quite a bit of research on complex pain behavior in crabs by Robert Elwood. I’d link his site, but it doesn’t seem to be up right now. You should be able to find the articles, though. Crabs have about 100,000 neurons, which is around what many insects have.
Here is a PDF of a paper finding that a number of common human mind-altering drugs affect crawfish and fruit flies.
Thanks.
It is quite implicit/instinctive. The problem is that without having solved the problem of consciousness, there is also uncertainty about what you’re even looking for. Nociception seems to be a necessary criterion, but it’s not sufficient. In addition, I suspect that consciousness’ adaptive role has to do with the weighting of different “possible” behaviors, so there has to be some learning behavior or variety in behavioral subroutines.
I actually give some credence to extreme views like Dennett’s (and also Eliezer’s, if I’m informed correctly), which hold that sentience implies self-awareness, but my confidence in that is not higher than 20%. I read a couple of papers on invertebrate sentience and adjusted the expert estimates downwards somewhat, because I have a strong intuition that many biologists are too eager to attribute sentience to whatever they are studying (also, it is a bit confusing because opinions are all over the place). Brian Tomasik lists some interesting quotes and material here.
And regarding the number-of-neurons question, I’m basically just going by intuition, which is unfortunate, so I should think about this some more.
Ice9, perhaps consider uncontrollable panic. Some of the most intense forms of sentience that humans undergo seem to be associated with a breakdown of meta-cognitive capacity. So let’s hope that what it’s like to be an asphyxiating fish, for example, doesn’t remotely resemble what it feels like to be a waterboarded human. I worry that our intuitive dimmer-switch model of consciousness, i.e. more intelligent = more sentient, may turn out to be mistaken.
OK, thanks for clarifying.
Good point; there is reason to expect that I’m just assigning numbers in a way that makes the result come out convenient. Last time I did a very rough estimate, the expected suffering of insects and nematodes (given my subjective probabilities) came out at around half the expected suffering of all decapod/amphibian-and-larger wild animals. And then wild animals outnumber farm animals by around 2-3 orders of magnitude in terms of expected suffering, and farm animals outnumber humans by a large margin too. So if I just cared about current suffering, or suffering on Earth only, then “non-negligible” would indeed be an understatement for insect suffering.
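Purely to illustrate how those rough ratios compose, here is a small sketch. The 300x factor is a pick within the “2-3 orders of magnitude” range above, and the 30x factor for farm animals versus humans is an arbitrary placeholder for “a large margin”, not an actual figure behind the estimate.

```python
# Composing the rough ratios from the comment above into shares of total
# expected suffering. The 300x pick sits inside the stated "2-3 orders of
# magnitude" range; the 30x pick for "a large margin" is purely a placeholder.

humans = 1.0          # normalize expected human suffering to 1
farm = 30 * humans    # farm animals: "a large margin" above humans (placeholder)
wild = 300 * farm     # larger wild animals: "around 2-3 orders of magnitude" above farm
insects = 0.5 * wild  # insects/nematodes: "around half" of larger wild animals

total = humans + farm + wild + insects
for name, value in [("humans", humans), ("farm animals", farm),
                    ("larger wild animals", wild), ("insects/nematodes", insects)]:
    print(f"{name:>22}: {100 * value / total:5.1f}% of total expected suffering")

# With these placeholders, insects/nematodes alone account for roughly a third
# of the total, and humans for well under 0.1%.
```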
However, what worries me most is not the suffering that is happening on Earth. If space colonization goes wrong, or even just non-optimally, the current amount of suffering could be multiplied by orders of magnitude. And this might happen even if our values improve. Consider the case of farmed animals: humans have probably never cared as much about the welfare of animals as they do now, but at the same time, we have never caused as much direct suffering to animals as we do now. If you primarily care about reducing the absolute amount of suffering, then whatever lets the amount of sentience skyrocket is a priori very dangerous.
Is the blue-minimizing robot suffering if it sees a lot of blue? Would you want to help alleviate that suffering by recoloring blue things so that they are no longer blue?
I don’t see the relevance of this question, but judging by the upvotes it received, it seems that I’m missing something.
I think suffering is suffering, no matter the substrate it is based on. Whether such a robot would be sentient is an empirical question (in my view, anyway; it has recently come to my attention that some people disagree with this). Once we solve the problem of consciousness, it will turn out that such a robot either is conscious or it isn’t. If it is conscious, I will try to reduce its suffering. If the only way to do that would involve doing “weird” things, I would do weird things.
The relevance is that my moral intuitions suggest that the blue-minimizing robot is morally irrelevant. But if you’re willing to bite the bullet here, then at least you’re being consistent (although I’m no longer sure that consistency is such a great property of a moral system for humans).