But when you think that suffering is the thing that matters, you confuse the map for the territory, the measure for the man, the math with reality.
I like this quote a lot—I feel like it captures a lot of why I don’t like suffering-focused ethics. It also seems very related to beliefs about the moral value of animals: my guess is that a wide variety of non-human animals can experience suffering, but very few can live a meaningful and fulfilling life. If you primarily care about suffering, then animal welfare is a huge priority, but if you instead care about meaning, fulfillment, love, etc., then it’s much less clearly important.
I also like the quote. I consider meaning and fulfillment of life goals morally important, so I’m against one-dimensional approaches to ethics.
However, I think it’s a bit unfair that just because the quote talks about suffering (and not pleasure/positive experience), you then go on to talk exclusively about suffering-focused ethics.
First, “suffering-focused ethics” is an umbrella term that encompasses several moral views, including very much pluralistic ones (see the start of the Wikipedia article or the start of this initial post).
Second, even if (as I do from here on) we assume that you’re talking about “exclusively suffering-focused views/axiologies” – which I concede are somewhat common, though still a minority, both in EA at large and among suffering-focused views in particular – I’d like to point out that the same criticism (of “map-and-territory confusion”) applies just as much, if not more strongly, to classical hedonistic utilitarian views. I would also argue that classical hedonistic utilitarianism has had, at least historically, more influence among EAs, and that it better describes where SBF himself was coming from (not that we should give much weight to this last bit).
To elaborate, I would say the “failure” (if we want to call it that) of exclusively suffering-focused axiologies is incompleteness rather than mistakenly reifying a proxy metric for its intended target. (Whereas the “failure” of classical hedonism is, IMO, also the latter.) I think suffering really is one of the right altruistic metrics.
The best answer, IMO, to “What constitutes (morally relevant) suffering?” is: something that is always important to the being that suffers. I.e., suffering is always bad (or, in its weakest forms, suboptimal) from the perspective of the being that suffers. I would define suffering as an experienced need to change something about one’s current experience. (Or to end said experience, in the case of extreme suffering.)
(Of course, not everyone who subscribes to a form of suffering-focused ethics would see it that way – e.g., people who see the experience of pain asymbolia as equally morally disvaluable as what we ordinarily call “pain” have a different conception of suffering. Similarly, I’m not sure whether Brian Tomasik’s pan-everythingism about everything would give the same line of reasoning as I would for caring a little about “electron suffering,” or whether this case is so different and unusual that we have to see it as essentially a different concept.)
And, yeah, keeping in mind the distinction between map and territory: when we focus on the suffering beings and not the suffering itself, we can see that there are some sentient beings (“moral persons,” in Singer’s terminology) to whom things other than their experiences can be important.
Still, I think the charge “you confuse the map for the territory, the measure for the man, the math with reality” sticks much better against classical hedonistic utilitarianism. After all, take the classical utilitarian’s claim “pleasure is good.” I’ve written about this in a shortform on the EA Forum. As I would summarize it now, “pleasure is good” admits two interpretations, which can be used for a motte-and-bailey. I will label these two claims “uncontroversial” and “controversial.” Note how the uncontroversial claim has only vague implications, whereas the controversial one has huge and precise implications (a maximizing hedonist axiology).
(1) Uncontroversial claim: When we say that pleasure is good, we mean that all else equal, pleasure is always unobjectionable, and often it is what we higher-order desire.
This uncontroversial claim is compatible with “other things also matter morally.”
(For comparison, the uncontroversial interpretation for “suffering is bad” is “all else equal, suffering is always [at least a bit] objectionable, and often something we higher-order desire against.”)
(2) Controversial claim: When we say that pleasure is good, what we mean is that we ought to be personal hedonist maximizers. This includes claims like “all else equal, more pleasure is always better than less pleasure,” among a bunch of other things.
“All else equal, more pleasure is always better than less pleasure” seems false. At the very least, it’s really controversial (that’s why it’s not part of the uncontroversial claim, which just says “pleasure is always unobjectionable”).
When I’m cozily in bed half-asleep and cuddled up next to my soulmate and I’m feeling perfectly fulfilled in life in this moment, the fact that my brain’s molecules aren’t being used to generate even more hedons is not a problem whatsoever.
By contrast, “all else equal, more suffering is always worse than less suffering” seems to check out – that’s part of the uncontroversial interpretation of “suffering is bad.”
So, “more suffering is always worse” is uncontroversial, while “more intensity of positive experience is always better (in a sense that matters morally and is worth tradeoffs)” is controversial.
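To make the asymmetry explicit in symbols (a rough formalization of my own, just for illustration): hold everything else fixed, write $W$ for overall value, $s$ for the amount of suffering in an experience, and $p$ for the amount of pleasure. Then:

$$\text{uncontroversial:}\quad s_1 > s_2 \;\Rightarrow\; W(s_1) < W(s_2) \qquad\qquad \text{controversial:}\quad p_1 > p_2 \;\Rightarrow\; W(p_1) > W(p_2)$$

The first monotonicity claim is the one I endorse; the second is the one the half-asleep-in-bed example above is meant to cast doubt on.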
That’s why I said the following earlier on in my comment here:
I would say the “failure” (if we want to call it that) of exclusively suffering-focused axiologies is incompleteness rather than mistakenly reifying a proxy metric for its intended target. (Whereas the “failure” of classical hedonism is, IMO, also the latter.) I think suffering really is one of the right altruistic metrics.
But “maximize hedons” isn’t.
The point to notice, for proponents of an exclusively suffering-focused axiology, is that humans have two motivational systems, not just the System-1 motivation that I see as being largely about the prevention of short-term cravings/suffering. Next to that, there are also higher-order, “reflective” desires. These reflective desires are often (though not in everyone) about (specific forms of) happiness or about things other than experiences (or, perhaps better put, about how specific experiences are embedded in the world – their contact with reality).
When I’m cozily in bed half-asleep and cuddled up next to my soulmate and I’m feeling perfectly fulfilled in life in this moment, the fact that my brain’s molecules aren’t being used to generate even more hedons is not a problem whatsoever.
Obviously I agree with this. I find it strange that you would take me to be disagreeing with this and defending some sort of pure pleasure version of utilitarianism. What I said was that I care about “meaning, fulfillment, love”—not just suffering, and not just pleasure either.
Where I agree with classical utilitarianism is that we should compute goodness as a function of experience, rather than e.g. preferences or world states (and then integrate over your anthropic prior, as in UDASSA). But I think that function is extremely complex, dependent on one’s entire lifetime, and not simply reducible to basic proxies like pleasure or pain.
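In symbols, a minimal sketch of that shape (my notation, and only a sketch – not a canonical UDASSA formula):

$$V \;=\; \sum_{e \in E} m(e)\, g(e)$$

where $E$ ranges over possible experience-histories, $m(e)$ is the anthropic measure UDASSA assigns to $e$ (roughly, weight decreasing with the description length of the program that picks out that observer), and $g$ is the goodness function – which, again, I think is extremely complex and defined over whole lifetimes rather than being a pointwise sum of pleasures and pains.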
I think I would also go a bit further, and claim that, while I agree that both pain and pleasure should be components of what makes a life experience good or bad, neither pain nor pleasure should be very large components on their own. Like I said above, I tend to think that things like meaning and fulfillment are more important.
Obviously I agree with this. I find it strange that you would take me to be disagreeing with this and defending some sort of pure pleasure version of utilitarianism. What I said was that I care about “meaning, fulfillment, love”—not just suffering, and not just pleasure either.
That seems like a misunderstanding – I didn’t mean to be saying anything about your particular views!
I only brought up classical hedonistic utilitarianism because it’s a view that many EAs still place a lot of credence on (it seems more popular than negative utilitarianism?). Your comment seemed to me to be unfairly singling out (strongly/exclusively) suffering-focused ethics. I wanted to point out that there are other EA-held views (not yours) to which the same criticism applies just as much or (arguably) even more strongly.
Where I agree with classical utilitarianism is that we should compute goodness as a function of experience, rather than e.g. preferences or world states
Isn’t this incompatible with caring about genuine meaning and fulfillment, rather than just feelings of them? For example, it would be better for you to feel like you’re doing more good than to actually do good. It would be better to be put into an experience machine and be systematically mistaken about everything you care about – e.g., whether the people you love even exist (are conscious, etc.) at all – even against your own wishes, as long as it feels more meaningful and fulfilling (and you never find out it’s all fake, or any cost of finding out can be outweighed). You could also have what you find meaningful changed against your wishes, e.g. made to find counting blades of grass very meaningful, more so than caring for your loved ones.
FWIW, this is also an argument for non-experientialist “preference-affecting” views, similar to person-affecting views. On common accounts of how we weigh or aggregate, if there are subjective goods, then they can be generated so as to outweigh the violation and abandonment of your prior values, even against your own wishes, if they’re strong enough.
The way you describe it makes it sound awful, but actually I think simulations are great, and you shouldn’t think that there’s a difference between being in a simulation and being in base reality (whatever that means). Simple argument: if there’s no experiment that you could ever possibly do to distinguish between two situations, then I don’t think those two situations should be morally distinct.
Well, there could be ways to distinguish, but it could be like a dream, where much of your reasoning is extremely poor, but you’re very confident in it anyway. Like maybe you believe that your loved ones in your dream saying the word “pizza” is overwhelming evidence of their consciousness and love for you. But if you investigated properly, you could find out they’re not conscious. You just won’t, because you’ll never question it. If value is totally subjective and the accuracy of beliefs doesn’t matter (as would seem to be the case on experientialist accounts), then this seems to be fine.
Do you think simulations are so great that it’s better for people to be put into them against their wishes, as long as they perceive/judge it as more meaningful or fulfilling, even if they wouldn’t find it meaningful/fulfilling with accurate beliefs? Again, we can make it so that they don’t find out.
Similarly, would involuntary wireheading or drugging to make people find things more meaningful or fulfilling be good for those people?
Or, what about something like a “meaning shockwave,” similar to a hedonium shockwave: quickly killing and replacing everyone with conscious systems that take no outside input and have no sensations (or only the bare minimum) beyond what’s needed to generate feelings or judgements of meaning, fulfillment, or love? (Some person-affecting views could avoid this while still matching the rest of your views.)
Of course, I think there are good practical reasons to not do things to people against their wishes, even when it’s apparently in their own best interests, but I think those don’t capture my objections. I just think it would be wrong, except possibly in limited cases, e.g. to prevent foreseeable regret. The point is that people really do often want their beliefs to be accurate, and what they value is really intended — by their own statements — to be pointed at something out there, not just the contents of their experiences. Experientialism seems like an example of Goodhart’s law to me, like hedonism might (?) seem like an example of Goodhart’s law to you.
I don’t think people and their values are in general replaceable, and if they don’t want to be manipulated, it’s worse for them (in one way) to be manipulated. And that should only be compensated for in limited cases. As far as I know, the only way to fundamentally and robustly capture that is to care about things other than just the contents of experiences and to take a kind of preference/value-affecting view.
Still, I don’t think it’s necessarily bad or worse for someone not to care about anything but the contents of their experiences. And if the state of the universe were already hedonium or just experiences of meaning, that wouldn’t be worse. What matters is that people do specifically care about things beyond just the contents of their experiences. If they didn’t, and also didn’t care about being manipulated, then it seems like it wouldn’t necessarily be bad to manipulate them.
What thought process went into your guess that very few non-human animals can live a meaningful and fulfilling life? My guess is that many mammals and birds can live a meaningful and fulfilling life, though the phrase “meaningful and fulfilling” strikes me as hard to specify. I’m mostly thinking that having emotionally significant social bonds with other individuals is sufficient for a life to be meaningful and fulfilling, and that many mammals and birds can form emotionally significant social bonds.
And if emotionally significant social bonds don’t count, it seems like we could be throwing away what humans typically find most important in their lives.
Of course, I think there are potentially important differences. I suspect humans tend to be willing to sacrifice or suffer much more for those they love than (almost?) all other animals. Grief also seems to affect humans more (longer, deeper), and it’s totally absent in many animals.
On the other hand, I guess some other animals will fight to the death to protect their offspring. And some die apparently grieving. This seems primarily emotionally driven, but I don’t think we should discount it for that fact. Emotions are one way of making evaluations, like other kinds of judgements of value.
EDIT: Another possibility is that other animals form such bonds and could even care deeply about them, but don’t find them “meaningful” or “fulfilling” at all or in a way as important as humans do. Maybe those require higher cognition, e.g. concepts of meaning and fulfillment. But it seems to me that the deep caring, in just emotional and motivational terms, should be enough?
I think that unless we can find a specific causal relationship implying that the capacity to form social bonds increases overall well-being capacity, we should assume that attaching special importance to this capacity is merely a product of human bias.
Humans typically assess an animal’s capacity for well-being and meaningful experience based on perceived overlap, or shared experience. As though humans are this circle in a Venn diagram, and the extent to which our circle overlaps with an iguana’s circle is the extent to which that iguana has meaningful experience.
I think this is clearly fallacious. An iguana has their own circle; maybe the circle is smaller, but there’s a huge area of non-overlap that we can’t just entirely discount because we’re unable to relate to it. We can’t define meaningful experience by how closely it resembles human experience.
I would be surprised if iguanas find things meaningful that humans don’t find meaningful, but maybe they desire some things pretty alien to us. I’m also not sure they find anything meaningful at all, but that depends on how we define meaningfulness.
Still, I think focusing on meaningfulness is also too limited. Iguanas find things important to them, meaningful or not. Desires, motivation, pleasure and suffering all assign some kind of importance to things.
In my view, either:
(1) capacity for welfare is something we can measure and compare based on cognitive effects, like effects on attention, in which case it would be surprising if other vertebrates, say, had tiny capacities for welfare relative to humans, or
(2) interpersonal utility comparisons can’t be grounded, so there are no grounds to say iguanas have lower (or higher) capacities for welfare than humans, assuming they have any at all.
I would be interested in an explanation of how the quote captures why you don’t like suffering-focused ethics.
My (possibly naive) perspective is that people who downplay the relative moral significance of suffering just lack relevant experience when it comes to qualia states.
If someone hasn’t experienced certain levels of suffering over certain durations, how can they reasonably judge that hundreds of billions of years worth of those experiences are relatively insignificant?
If you primarily care about suffering, then animal welfare is a huge priority, but if you instead care about meaning, fulfillment, love, etc., then it’s much less clearly important.
It’s hard for me not to interpret the word “care” here as relating to attention, rather than intrinsic values. To me it seems that if someone’s attention were calibrated such that they deeply understood the implications of billions of animals having surgery performed on them without anesthesia, while also understanding the implications of people potentially having marginally more meaningful lives, they would generally consider the animal issue to be more pressing.
I’m quite interested in what you might think I’m missing. I often find myself very confused about people’s perspectives here.