At some point, Utilitarians are going to have to come up with a specific weighting of moral patienthood across different individuals.
By their behavior, we can see that it’s highly variable (a factor of hundreds or more, even between different existing humans, much more between some existing humans and most potential humans, and more still between humans and animals, and between different animal individuals). It’s hard to take seriously the project of being legible about effectiveness in altruism if this difference remains unacknowledged, with no attempt to calculate it.
For me (not a Utilitarian, so discount my opinion as you see fit), I think that slight increases in human enjoyment are worth large losses even in mammals, let alone fish or insects.
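For concreteness, here is a minimal sketch of what such an explicit weighting model could look like; all weights and welfare numbers below are made-up placeholders, not claims about the correct values.

```python
# Purely illustrative: an explicit moral-weight model of the kind asked for above.
# Every number here is a made-up placeholder, not a claim about correct values.

# Hypothetical moral-patienthood weights, relative to one existing human = 1.0
moral_weights = {
    "human": 1.0,
    "cow":   0.01,    # "a factor of hundreds or more" smaller, as a placeholder
    "fish":  0.001,
}

# Hypothetical welfare changes (arbitrary units) from some policy or purchase
welfare_deltas = {
    "human": +0.5,    # slight increase in human enjoyment
    "cow":  -20.0,    # large loss for the animals involved
    "fish": -50.0,
}

def weighted_total(weights, deltas):
    """Sum the welfare changes, each scaled by that patient's moral weight."""
    return sum(weights[k] * deltas[k] for k in deltas)

print(weighted_total(moral_weights, welfare_deltas))
# Positive means the trade-off comes out net-good under these weights;
# different weights flip the sign, which is exactly the point of disagreement.
```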
I don’t think Utilitarians are a sufficiently homogeneous group for them all to agree on any kind of specific weighting. And I don’t really see why that is a problem. Each individual utilitarian might be internally coherent, but that doesn’t mean the group will agree on anything much or be coherent taken together.
You say you are not a utilitarian, and then you offer a utilitarian argument (my understanding of your argument: fish suffering is an acceptable price for human enjoyment). Maybe we are using the words differently, but I would say anyone who is trying to weigh up the suffering and pleasure on either side of a decision to determine its morality is fundamentally a utilitarian.
Many (most?) people do not approach ethics this way at all. They take axioms like “murder is wrong” or “eating fish is natural”, and the pleasure or suffering that follows from the actions taken is irrelevant to their morality.
True, I didn’t mean that Utilitarians must agree on a weighting, but that each person who makes a Utilitarian-based argument for behavior change must have this weighting as part of their model. And that the conversion factor across individuals is a valid point of disagreement, even among those who share a general framework.
I am neither a utilitarian nor a deontologist (not sure precisely what I am, mostly a “muddled human mess”, with MANY of my actions and beliefs illegible, even to me). But I’m happy to discuss the effects of various frameworks, and I (perhaps mistakenly) took the post to be a utilitarian-like framework for recognizing one kind of suffering, presumably with the intent to reduce it.
Thanks for clarifying, that makes sense.
I also have no idea what I am. Maybe something in the vein of what I think Hume proposed, where you are a kind of second-order utilitarian: you use utilitarianism to determine a set of rules of thumb, then follow those rules of thumb instead of actually being a utilitarian.
No part of my argument assumes that one is a utilitarian.
Apologies if I misunderstood—on re-reading it, I don’t actually see any explicit conclusions or weighting framework for decision-making based on the assertions made. I did assume you were implying some Utilitarian-like model where fish have moral weight, enough to override human preferences.
If that’s NOT your position, please clarify.
I challenge you to make any coherent argument for this. After a lifetime of studying brain function, I can’t justify eating factory-farmed meat, because there is no logical threshold between human brain function with respect to suffering and that of simpler animals. It’s a spectrum of complexity, not a binary category.
You’re free to have whatever values you want, but not to claim consistency where it doesn’t exist. Saying you care about some suffering and not other suffering isn’t logically coherent—unless you really don’t care about any suffering, just keeping the right social loyalties. Most humans just haven’t bothered to work through a logically consistent ethical framework—they just care about what their ingroup cares about, since that is easy and pragmatically useful.
I’m not sure exactly what you want me to provide a “coherent argument for”, but I’ll take it as my decision to eat meat and fish (including factory-farmed, veal, and foie gras; excluding cetaceans and endangered species for diversity-preservation reasons, and excluding humans, mostly for social-cost reasons, I think).
It’s quite consistent to say that I care about both pleasure/satisfaction and suffering (and other dimensions of experience), and that some pleasures outweigh some (or even most) suffering. It may mean that I’m a jerk or a utility monster, but it doesn’t mean that I’m incoherent. Note that part of my beliefs is that there is a LOT of complexity which is hard to communicate, and things that may appear inconsistent to you are actually different situations to me.
It’s a complex topic, and I probably shouldn’t have jumped in on the comments section. You are of course welcome to your own preferences; I only take issue with claims of logical consistency. Fish either do or don’t suffer in a way comparable to the human suffering we care about; it’s a question of fact, not preference. The brain mechanisms underlying suffering are worth a whole post, but I haven’t written that post, because alignment seems more important, and I might just put it off until the small matter of human survival is settled. I also prioritize humans drastically over other animals, but I’m pretty sure my ethical behavior is logically inconsistent and only pragmatically useful.
I don’t claim to know whether fish suffer, nor really how much they suffer. I will say that I distinguish pain from suffering (in an imperfectly modeled way, but consistent with the saying “Pain is inevitable, suffering is optional”), and I’m suspicious that neurological studies I’ve seen seem to conflate the two.
The question of how much weight to give such suffering is, as you say, the more important one. I find it less important than the enjoyment of cheap, available seafood. You don’t have to. This is a valid difference of preference, without either of us being wrong.
Saying you care about some suffering and not other suffering isn’t logically coherent

Why? What’s logically incoherent about it?
I care about the suffering of people, but not the suffering (or, perhaps more accurately, “suffering”) of non-people. This seems straightforward enough.
Is there some logical theorem that dictates that I should care about things other than that which I do, in fact, care about? How can there be such a thing?