I think it’s fair to use suicide as a benchmark for when someone’s life becomes miserable enough for them to end it.
Yes, but that’s because it’s a tautology!
I don’t think I agree that suicide is a sufficient proxy for whether an entity enjoys life more than it dislikes it, because I can imagine too many plausible, yet currently unknown, mechanisms that could act as mitigating factors. For example:
I imagine that most evolved entities have mental processes and instincts that add a significant extra prohibition against making the active choice to end their own life, and thus that mental ability plays a much smaller role in suicide “decisions”.
In a world where there is no built-in prohibition against ending your own life, if the “enjoys life” indicator is at level 10 and the “hates life” indicator is at level 11, then suicide is on the table.
In what I think is probably our world, when the “enjoys life” indicator is at level 10, the “hates life” indicator has to reach something like level 50 before suicide is on the table.
What’s more, it seems plausible to me that the strength of this own-life-valuing add-on varies from species to species and from individual to individual.
If this holds true, then the own-life-valuing add-on would only be present in a being that already exists.
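To make this toy model concrete, here is a minimal sketch in Python. The numbers are purely illustrative, and the `extra_prohibition` parameter is just a stand-in for the hypothetical per-species, per-individual add-on described above, not anything measured:

```python
# A minimal sketch of the toy "indicator" model above. All numbers are
# illustrative assumptions, not empirical claims about any real species.

def suicide_on_the_table(enjoys_life: float, hates_life: float,
                         extra_prohibition: float = 0.0) -> bool:
    """Suicide is 'on the table' once dislike of life exceeds enjoyment of
    life by more than the built-in prohibition against ending one's own life."""
    return (hates_life - enjoys_life) > extra_prohibition

# A world with no built-in prohibition: 11 vs. 10 is already enough.
print(suicide_on_the_table(enjoys_life=10, hates_life=11))                        # True

# What is probably our world: with an extra prohibition of ~39, the
# "hates life" indicator has to reach ~50 before suicide is on the table.
print(suicide_on_the_table(enjoys_life=10, hates_life=11, extra_prohibition=39))  # False
print(suicide_on_the_table(enjoys_life=10, hates_life=50, extra_prohibition=39))  # True
```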
This is not to say that we can certainly conclude that animals being farmed don’t actually dislike life more than they enjoy it. This could certainly be the case, and they might just lack the reasoning to commit suicide.
...
Thus I fail to see a strong ethical argument against the eating of animals from this perspective.
Here you’re seemingly willing to acknowledge that it’s at least *possible* that animals dislike life more than they enjoy it. If I read you correctly and that is what you’re acknowledging, then you would really need to compare the cost of that possibility being correct against the cost of not eating meat before reaching any conclusion about the ethics of eating animals.
Until then, the sanest choice would seem to be that of focusing our suffering-diminishing potential onto the beings that can most certainly suffer so much as to make their condition seem worse than death.
This seems to me similar to arguments along the lines of “why waste money on space telescopes (or whatever) when people are going hungry right here on earth?”.
Neither reducing the suffering of beings that can most certainly suffer nor reducing the suffering of those that might be suffering seems likely to consume all of our suffering-diminishing potential. Maybe we can conclude that the likelihood of farm animals suffering in a way that we should care about is so low as to be worth absolutely none of that potential, but I don’t think you’ve made that case.
In summary, the main critique I have of the line of argument presented in this post is that it hangs on suicide being a proxy for whether a life is worth living, and on suicide being equivalent to never having existed in the first place.
I don’t think you’ve made a strong enough case that suicide is a sufficient measure of suffering-has-exceeded-the-cost-of-continuing-to-live. There are too many potential and plausible confounding factors, and I think the case needs to be really strong to outweigh the costs of being wrong.
(Hilariously, I’m not a vegan or a vegetarian.)
You bring up good points. I don’t have time to answer in full, but here are notes on a few of them to which I can properly retort:
I don’t think I agree that suicide is a sufficient proxy for whether an entity enjoys life more than it dislikes it, because I can imagine too many plausible, yet currently unknown, mechanisms that could act as mitigating factors. For example:
I imagine that most evolved entities have mental processes and instincts that add a significant extra prohibition against making the active choice to end their own life, and thus that mental ability plays a much smaller role in suicide “decisions”.
In a world where there is no built-in prohibition against ending your own life, if the “enjoys life” indicator is at level 10 and the “hates life” indicator is at level 11, then suicide is on the table.
In what I think is probably our world, when the “enjoys life” indicator is at level 10, the “hates life” indicator has to reach something like level 50 before suicide is on the table.
What’s more, it seems plausible to me that the strength of this own-life-valuing add-on varies from species to species and from individual to individual.
But, if we applied this model, what would make it unique to suicide and not to any other preference?
And if you apply this model to any other preference and extend it to humans, things get really dystopian really fast.
This seems to me similar to arguments along the lines of “why waste money on space telescopes (or whatever) when people are going hungry right here on earth?”.
This is not really analogous, in that my example is “potential to reduce suffering” vs “obviously reducing suffering”. A telescope is neither of those; it’s working towards what I’d argue is more of a transcendent goal.
It’s more like arguing “Let’s give homeless people a place to sleep now, rather than focusing on market policies that have the potential to reduce housing costs further down the line” (which I still think is a good counter-example).
In summary, the main critique I have of the line of argument presented in this post is that it hangs on suicide being a proxy for whether a life is worth living, and on suicide being equivalent to never having existed in the first place.
I don’t think you’ve made a strong enough case that suicide is a sufficient measure of suffering-has-exceeded-the-cost-of-continuing-to-live. There are too many potential and plausible confounding factors, and I think the case needs to be really strong to outweigh the costs of being wrong.
I don’t think what I was trying to do was make a definitive case for “suicide is a sufficient measure of suffering-has-exceeded-the-cost-of-continuing-to-live”. I was making a case for something closer to “suicide is a better measure of suffering-has-exceeded-the-cost-of-continuing-to-live than any other, if we want to keep living in a society where we treat humans as free, conscious agents and give them rights based on that assumption; while it is still imperfect, any other arbitrary measure will also be imperfect, only worse” (which is still a case I don’t make perfectly, but at least one I could argue I’m creeping towards).
My base assumption here is that in a society of animal-killers, the ball is in the court of the animal-antinatalists to come up with a sufficient argument to justify the (human-pleasure-reducing) change. But it seems to me that the intuitions on the basis of which we breed and kill animals are almost never spelled out, so I tried to give words to what I hoped might be a common intuition as to why we are fine with breeding and killing animals but not humans.
Here you’re seemingly willing to acknowledge that it’s at least *possible* that animals dislike life more than they enjoy it. If I read you correctly and that is what you’re acknowledging, then you would really need to compare the cost of that possibility being correct against the cost of not eating meat before reaching any conclusion about the ethics of eating animals.
I am also willing to acknowledge that it is at least *possible* some humans might benefit from actions that they don’t consent to, but I still don’t engage in those actions, because I think it’s preferable to treat them as agentic beings that can make their own choices about what makes them happy.
If I give that same “agentic being” treatment to animals, then the suicide argument kind of holds. If I don’t give that same “agentic being” treatment to animals, then what is to say suffering as a concept even applies to them? After all, a mycelium or an ecosystem is also a very complex “reasoning” machine, but I don’t feel any moral guilt when plucking a leaf or a mushroom.
But, if we applied this model, what would make it unique to suicide and not to any other preference?
And if you apply this model to any other preference and extend it to humans, things get really dystopian really fast.
I’m not sure it is unique to suicide, and regardless, I’d imagine we’d have to take it on a case-by-case basis because evolution is messy. I think whether or not it leads to dystopia is not a useful way to determine whether it actually describes reality.
Regardless, the argument I’m trying to make is not that the model I described is the correct model, but that it’s at least a plausible model, and that there are probably other plausible models. If there are such alternative plausible models, then you have to seriously engage with them before you can make a considered decision that the suicide rate is a good proxy for the value of animal life.
This is not really analogous, in that my example is “potential to reduce suffering” vs “obviously reducing suffering”. A telescope is neither of those; it’s working towards what I’d argue is more of a transcendent goal.
Yes, I agree that along that dimension it is not analogous. I was using it as an example of the fact that addressing more than one issue is possible when the resources available equal or exceed the sum of the resources required to address each issue.
I am also willing to acknowledge that it is at least *possible* some humans might benefit from actions that they don’t consent to, but I still don’t engage in those actions, because I think it’s preferable to treat them as agentic beings that can make their own choices about what makes them happy.
I think my point was that until you’re willing to put a semblance of confidence levels on your beliefs, you’re making it easy to succumb to inconsistent actions.
How likely is it that we don’t understand the mental lives of animals well enough to use the suicide argument? What are the costs if we’re wrong? What are the costs if we forgo eating them?
Most of society has agreed that actually, yes, we should coerce some humans into actions that they don’t consent to. See laws, prisons, etc. This is because we can look at individual cases, weigh the costs and benefits, and act accordingly. A generalized principle of “prefer to treat them as agentic beings, with exceptions” is how most modern societies currently work. (How effective we are at that seems to vary widely... but I think most would agree that it’s better than the alternative.)
Regardless, I’m not sure that arranging our food chain to lessen or eliminate the number of animals born to be eaten actually amounts to interfering with independent agents’ ability to self-determine. If it did, it seems like we are failing in a major way by not encouraging everyone to bring as many humans into existence as possible, until we’re all living at the subsistence level.
People mostly don’t commit suicide just because they’re living at such a level. Thus, by your argument, I think we are doing the wrong thing by not greatly increasing the production of humans. However, I think most people’s moral intuitions cut against that course of action.
Nonhuman animals and children have limited agency and irrational, poorly informed preferences. We should use behaviour as an indication of preferences, but not only behaviour, and especially not only behaviour when faced with the given situation (since other behaviour is also relevant). We should try to put ourselves in their shoes and reason about what they would want were they more rational and better informed. The more informed and rational they are, the more we can just defer to their choices.
If I give that same “agentic being” treatment to animals, then the suicide argument kind of holds. If I don’t give that same “agentic being” treatment to animals, then what is to say suffering as a concept even applies to them? After all, a mycelium or an ecosystem is also a very complex “reasoning” machine, but I don’t feel any moral guilt when plucking a leaf or a mushroom.
I think this is a good discussion of evidence for the capacity to suffer in several large taxa of animals.
I also think that not having agency is not a defeater for suffering. You can imagine that in some of our worst moments of suffering we lose agency (e.g. in a state of panic), or that we could artificially disrupt someone’s agency (e.g. through transcranial magnetic stimulation, drugs or brain damage) without taking the unpleasantness of an experience away. Just conceptually, agency isn’t required for hedonistic experience.