I agree with the second paragraph of steven0461's comment.
The present posting ignores the impact of signing up for cryonics / donating to VillageReach on existential risk, which should outweigh all other considerations in utilitarian expected value.
I presently believe that for most people who are interested in x-risk reduction, the expected x-risk reduction from signing up for cryonics is lower than that from donating to VillageReach. My thinking here is that donating to VillageReach signals philanthropic intention and affords networking opportunities with other people who care about global welfare and who might be persuaded to work against x-risk, whereas signing up for cryonics signals weirdness to everyone outside of a very narrow set of people.
As to the claim that most people can't be convinced of cryonics, I strongly doubt that this is the case. It's a huge uphill battle, no doubt, but given enough dollars toward PR (or enough intelligent promotion by unpaid advocates on the web), it can be done.
The beneficial impact of signing up for cryonics on x-risk reduction seems to me to be predicated on the possibility of spreading cryonics to a population positioned to decrease x-risk who would not work to decrease x-risk if they were not signed up for cryonics.
I would expect the existential risk reduction returns from encouraging long-term thinking by getting people to sign up for cryonics to be dwarfed by the returns from encouraging long-term thinking directly, and I would expect those returns to be dwarfed by the returns from encouraging rational long-term thinking on especially important topics.
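The comparison running through these comments is ultimately an expected-value inequality: which action produces more expected x-risk reduction per person influenced? A toy sketch in Python, where every number is a made-up placeholder for illustration only, not an estimate from any commenter:

```python
# Toy expected-value comparison for the argument above.
# All probabilities and effect sizes below are hypothetical
# placeholders, not estimates drawn from the discussion.

def expected_xrisk_reduction(p_influence, effect_size):
    """Expected x-risk reduction from an action, modeled crudely as
    P(the action moves someone toward x-risk work) * size of that effect."""
    return p_influence * effect_size

# Hypothetical numbers: donating signals philanthropy to a receptive
# audience (higher chance of influence); cryonics signals weirdness
# to most audiences (lower chance of influence).
donate = expected_xrisk_reduction(p_influence=1e-4, effect_size=1e-3)
cryonics = expected_xrisk_reduction(p_influence=1e-6, effect_size=1e-3)

print(donate > cryonics)  # → True, under these placeholder numbers
```

The point of the sketch is only to make the structure of the claim explicit; with different placeholder numbers the inequality can flip, which is exactly what the disagreement in this thread is about.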
Donating to VillageReach signals philanthropic intention and affords networking opportunities with other people who care about global welfare who might be persuaded to work against x-risk
Also, donating to VillageReach saves people’s lives, and those people will have agency and abilities and may very well contribute to existential risk reduction.
They will still come from a very poorly educated area of the world. I think the effect is overall a little unclear (it might stabilize that area of the world somewhat, which would have positive spillover for everyone else; or the population increase will spark additional conflict that has negative spillover).
Should we also work to boost birth rates in all areas of the world? We are working against that goal in some key ways. It is hard to control all the variables, but there is very convincing evidence that modernization affects birth rates in developing countries through a number of channels, including the cost of children, the productivity of children, and the education of women.
But they may also contribute to existential risk increase. What sort of calculation have you made that makes you think these people are more likely to contribute to existential risk increase?
I don’t think I’ve seen any reasonable argument that simply having more random people around will help deal with existential risk. The most likely existential risks (UFAI, grey goo, bioterrorism, whatever) will be caused by people, after all.
However, as Carl Shulman has remarked:
And lsparrish has written:
That would make cryonics a self-serving reward that utilitarians award themselves after doing some good deeds.
It’s not hypocritical if we acknowledge that our values are partially but not completely selfish.
Yes, I can imagine that position. I was more curious to see whether anyone else was going to try to make a utilitarian case for it.