Thank you for a very thorough post. Your writing has given me a more organized account of some of my own impressions against longtermism.
I agree with CrimsonChin that there's a lot in your post many longtermists would agree with, including the practicality of focusing on short-term sub-goals. I also personally believe that initiatives like global health and poverty reduction probably improve the prospects of the far future, even if their expected value seems lower than that of X-risk mitigation.
Nonetheless, I still think we should be motivated by the immensity of the future, even when it is offset by tiny probabilities and huge margins of error, because the lower bounds of these estimates strike me as high enough to be very compelling. The post How Many Lives Does X-Risk Work Save From Nonexistence On Average demonstrates my thinking here: its estimates of future lives vary by dozens of orders of magnitude(!), yet it still arrives at very high expected values for X-risk work even at the lower bounds.
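To make the shape of that argument concrete, here is a deliberately rough sketch. Every number below is a placeholder I chose for illustration, not a figure from the linked post; the only point is that a low-end estimate of future lives multiplied by a small probability of success can still yield an expected value that competes with near-term benchmarks.

```python
# Purely illustrative expected-value sketch; every number below is a
# hypothetical placeholder, NOT an estimate from the linked post.

lower_bound_future_lives = 1e16  # assumed lower bound on future lives
p_avert_extinction = 1e-9        # assumed chance this project's work averts extinction
project_cost_dollars = 1e9       # assumed total cost of the project

expected_lives_saved = lower_bound_future_lives * p_avert_extinction
lives_per_1000_dollars = expected_lives_saved / (project_cost_dollars / 1_000)

print(f"Expected future lives saved: {expected_lives_saved:.0e}")   # 1e+07
print(f"Expected lives per $1,000:   {lives_per_1000_dollars:.1f}") # 10.0
```

Even with these deliberately conservative placeholders, the result is on the order of ten expected lives per $1,000, which is the kind of comparison that keeps the lower bounds compelling to me.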
Even I don’t really feel anything when I read such massive numbers, and I acknowledge how wide the intervals of these estimates are, but I wouldn’t say they “make no sense to me” or that “to the extent we can quantify existential risks in the far future, we can only say something like ‘extremely likely,’ ‘possible,’ or ‘can’t be ruled out.’”
For what it’s worth, I used to be essentially an egoist and was unmoved by every charity I had ever encountered. It seemed to me that humanity was on a good trajectory and my personal impact would be negligible. It was only after I started thinking about really large numbers, like the duration of the universe, the age of humanity, and the number of potential minds in the universe (credit to SFIA), along with how neglected these figures were and moral uncertainty, that I started to feel like I could and should act for others.
There are definitely many, possibly most, contexts where incredibly large or small numbers can be safely disregarded. I wouldn’t be moved by them in adversarial situations, like Pascal’s Mugging, or in my day-to-day moral decision making. But for questions like “What should I really care deeply about?” I think they should be considered.
As for Pascal’s Wager, it calls for picking one very specific God to worship out of a space of infinitely many contradictory possible gods, and that vanishingly small probability of success cancels out the infinite reward of heaven over hell or non-existence. Longtermism, by contrast, isn’t committed to any specific action regarding the far future, just to the well-being of future entities generally. I expect most longtermists would gladly pivot away from a specific cause area (like AI alignment) if they were shown that some other cause (e.g., a planet-killing asteroid certain to collide with Earth in 100 years) was more likely to adversely impact the far future to a similar degree.
I think the whole concept of “saving from non-existence” makes very little sense. Are human souls just floating into a limbo, waiting to be plucked out by the lifeline of a body to inhabit? What are we saving them from? Who are we saving? How bad can a life be before the saving actually counts as damning?
There are no real answers, especially if you don’t want to get metaphysical. IMO the only forms of utilitarianism that make sense are based on averages, not total sums. The overall size of the denominator matters little in and of itself.
Strong agree. The framework of saving from non-existence creates more problems and confusion than it resolves.
It’s not the immensity of the future that motivates me to care. If the future consists of immortal people living really awesome and fulfilling lives, it’s a great future. It doesn’t really matter to me whether there are billions or nonillions of them. I would definitely not sacrifice people’s immortality to have more of them instantiated in the universe in total. The key point is how awesome and fulfilling the lives are, not their number.
Sorry for the late reply. I haven’t commented much on LW and didn’t appreciate how long it might take for someone to reply to me, so I missed this until now. If I reply to you, Ape in the coat, does that notify dr_s too?
If I understand dr_s’s quotation, I believe he’s responding to the post I referenced. How Many Lives Does X-Risk Work Save from Non-Existence includes pretty early on:
Whenever I say “lives saved” this is shorthand for “future lives saved from nonexistence.” This is not the same as saving existing lives, which may cause profound emotional pain for people left behind, and some may consider more tragic than future people never being born.[6]
I assume a zero-discount rate for the value of future lives, meaning I assume the value of a life is not dependent on when that life occurs.
It seems pretty obvious to me that in almost any plausible scenario, the lifespan of a distant-future entity with moral weight will be very different from what we currently think of as a natural lifespan (rounded to 100 years in the post I linked), but making estimates in terms of “lives saved from non-existence,” where a life = 100 years, is useful for making comparisons to other causes, like “lives saved per $1,000 via malaria bed nets.” It also seems appropriate for the post not to assume a discount rate and to leave that for readers to apply themselves on top of the estimates presented.
I prefer something like “observer moments that might not have occurred” to “lives saved.” I don’t have strong preferences between a relatively small number of entities having long lives and more numerous entities having shorter lives, so long as the quality of life per moment is held constant.
As for dr_s’s “How bad can a life be before the saving actually counts as damning,” this seems easily resolvable to me by just allowing “people” of the far future the right to commit suicide, perhaps after a short waiting period. This would put a floor on the suffering they experience if they can’t otherwise be guaranteed to have great lives.
I don’t presume to tell people what they should care about, and if you feel that thinking of such numbers and probabilities gives you a way to guide your decisions then that’s great.
I would say that, given how much humanity has changed in the past and the increasing rate of change, probably almost none of us can realistically predict the impact of our actions more than a couple of decades into the future. (That doesn’t mean we don’t try: the institution I work for is more than 350 years old and does try to manage its endowment with a view toward the indefinite future…)