Funny. That’s how I feel about “existential risk”! Talking about our entire civilization as if it were a financial asset, whose dollar-denominated price we can predict and hedge, is “neoliberalized” to a downright silly degree. It leaves the whole “what do we actually want, when you get right down to it?” question completely open, while also throwing some weird kind of history-wide total utilitarianism into the mix to conclude that causing some maximum number of lives-worth-living in the future is somehow an excuse to do nothing about real suffering by real people today.
You’re right that I forgot myself (well, lapsed into a cached way of thinking) when I mentioned x-risk and astronomical leverage; just as ‘goodness is monotonically increasing in consciousness’ is dubious, so is the claim that goodness is monotonically and significantly increasing in the number of lives saved, which is often how x-risk prevention is argued for. I’ve noticed this before, but clearly haven’t trained myself to frame it that way well enough not to lapse into the All the People perspective.
That said, there are some relevant (or at least not obviously irrelevant) considerations distinguishing the two cases. First, preventing x-risk is much more plausibly a coherent extrapolated selfish preference, whereas I’m not convinced the same holds for preventing animal suffering. Second, if I find humans more valuable than animals (even if only because they’re more interesting; and this is plausible because I am a human, which does provide a qualitative basis for such a distinction), then things like astronomical waste might seem important even if animal suffering didn’t.
Why should your True Preferences have to be selfish? I mean, there’s a lot to complain about with our current civilization, but almost-surely almost-everyone has something they actually like about it.
I had just meant to contrast “x-risk prevention as maximally effective altruism” with “malaria nets et al. for actually existing people as effective altruism”.
Why should your True Preferences have to be selfish?
What I mean is: for most people I meet, it seems very plausible that, say, self-preservation is a big part of their extrapolated values. And it seems much less plausible that their extrapolated value is monotonically increasing in consciousness, or in the number of conscious beings existing.
Any given outcome might have hints that it’s part of extrapolated value rather than a fake utility function. Examples of such hints: it persists as a felt preference over a long time and through many changes of circumstance; there are evolutionary reasons why it might be so strong an instrumental value that it becomes terminal; and so on.
Self-preservation has a lot of hints in its favor. Monotonicity in consciousness seems less obvious (maybe strictly less obvious, in the sense that every hint supporting monotonicity might also support self-preservation, while some further hint supports self-preservation but not monotonicity).