I personally don’t think it has much to do with moral worth, actually. It’s very warm-and-fuzzy to say we ought to place moral value on all conscious creatures, but I believe a proper solution to ethics is going to dissolve the concept of “moral worth” into components like (blatantly making names up here) “decision-theoretic empathy” (the agents and situations for which it’s rational for me to acausally cooperate), “altruism” (using my models of others’ values as a direct component of my own values, often derived from actual psychological empathy), and even “love” (outright personal attachment to another agent for my own reasons; and we’d usually say love should imply altruism).
So we might want to be altruistic towards chickens, but I personally don’t think chickens possess some magical valence that stops them from being “made of atoms I can use for something else”, beyond the general fact that I feel some very low level of altruism and empathy towards them.
Yes! I am very glad someone else is making this point, since it can sometimes seem (on a System 1 level, even if on a System 2 level I know it’s obviously false) that in my networks everyone’s gone mad identifying ‘consciousness’ with ‘moral weight’, going ethical vegetarian, and possibly prioritising animal suffering over x-risk and other astronomical-or-higher leverage causes.
Funny. That’s how I feel about “existential risk”! It’s “neoliberalized” to a downright silly degree to talk of our entire civilization as if it were a financial asset whose changes in dollar-denominated price we can predict or handle. It leaves the whole “what do we actually want, when you get right down to it?” question completely open, while also throwing some weird kind of history-wide total-utilitarianism into the mix, under which causing some maximum number of lives-worth-living in the future somehow becomes an excuse to do nothing about real suffering by real people today.
You’re right that I forgot myself (well, lapsed into a cached way of thinking) when I mentioned x-risk and astronomical leverage; just as ‘goodness is monotonically increasing in consciousness’ is dubious, it is dubious to claim that goodness is monotonically and significantly increasing in the number of lives saved, which is often how x-risk prevention is argued for. I’ve noticed this before, but clearly have not trained myself to frame things that way well enough to avoid lapsing into the All the People perspective.
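To spell out the parallel, both dubious claims have the same monotonic shape. A rough sketch, with G, C, and N as made-up stand-ins for goodness, amount of consciousness, and number of lives saved (none of them actually well-defined):

```latex
% Shared shape of both dubious claims: all else equal, "more implies better".
% G = goodness, C = amount of consciousness, N = number of lives saved.
% All three symbols are made-up stand-ins, not anything well-defined.
C(x) \ge C(y) \implies G(x) \ge G(y)
\qquad\text{and}\qquad
N(x) \ge N(y) \implies G(x) \ge G(y)
```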
That said, there are some relevant (or at least not obviously irrelevant) considerations distinguishing the two cases. First, caring about x-risk is much more plausibly a coherent extrapolated selfish preference, whereas I’m not convinced the same is true of caring about animal suffering. Second, if I find humans more valuable than animals (even if only because they’re more interesting; and this is also plausible because I am a human, which does provide a qualitative basis for such a distinction), then things like astronomical waste might seem important even if animal suffering didn’t.
Why should your True Preferences have to be selfish? I mean, there’s a lot to complain about with our current civilization, but almost-surely almost-everyone has something they actually like about it.
I had just meant to contrast “x-risk prevention as maximally effective altruism” with “malaria nets et al for actually existing people as effective altruism”.
Why should your True Preferences have to be selfish?
What I mean is: for most people I meet, it seems very plausible to me that, say, self-preservation is a big part of their extrapolated values. It seems much less plausible that their extrapolated values are monotonically increasing in consciousness, or in the number of conscious beings in existence.
Any given outcome might come with hints that it’s part of one’s extrapolated values rather than a fake utility function. Examples of such hints: the preference for it persists over a long time and across many changes of circumstance; there are evolutionary reasons why it might have been so strong an instrumental value that it became terminal; and so on.
Self-preservation has a lot of hints in its support. Monotonicity in consciousness seems less obvious (maybe strictly less obvious, in that every hint supporting monotonicity might also support self-preservation, with some further hint supporting self-preservation but not monotonicity).