it is a huge mistake to claim that [one’s own cryopreservation] is purely self interest or at odds with charitable interests. Rather it helps lay the groundwork for a hugely important philanthropic interest.
OK, you are appealing to the same argument that can be used to argue that the consumers of the 1910s who purchased and used the first automobiles were philanthropists for supporting a fledgling industry which went on to cause a substantial rise in the average standard of living. Do I have that right?
If so, the magnitude of the ability of cryonics to extend life expectancy might cause me to admit that your words “huge” and “hugely” are justified—but only under value systems that assign no utility to the people who will be born or created after the intelligence explosion. Relative to the number of people alive now or who will be born before the intelligence explosion, the expected number of lives after it is huge, and cryonics is of no benefit to those lives, whereas any effort we make toward reducing x-risks benefits both the relatively tiny number of people alive now and the huge number who will live later.
The three main reasons most philanthropists do not direct their efforts at x-risk reduction are: (1) they do not know and will not learn about the intelligence explosion; (2) even if they know about it, it is difficult for them to stay motivated when the objects of their efforts are as abstract as people who will not start their lives for hundreds of years (they need to travel to Africa or whatnot and see the faces of the people they have helped, or at least know that if they were to travel there, they would); and (3) they could figure out how to stay motivated to help those who will not start their lives for hundreds of years if they wanted to, but they do not want to: their circle of concern does not extend that far into the future (that is, they assign zero or very little intrinsic value to a life that starts in the far future).
But the people whose philanthropic enterprise is to get people to sign up for cryonics do not have excuses (1) and (2). So I have to conclude that either their circle of moral concern stops (or becomes very thin) before the start of the intelligence explosion, or they know that their enterprise is extremely inefficient philanthropy relative to x-risk reduction. Do you see any holes in my reasoning?
There are those (e.g., Carl and Nancy in these pages in the last few days and Roko in the past IIRC) who have taken the position that getting people to sign up for cryonics tends to reduce x-risks. I plan a top-level submission with my rebuttal to that position.
I do think reducing x-risk is extremely important. I agree with Carl, Nancy, Roko, etc. that cryonics tends to reduce x-risk. To reduce x-risk you need people to think about it in the first place, and cryonicists are more likely to do so because it is a direct threat to their lives.
Cryonics confronts a much more concrete and well-known phenomenon than x-risk. We all know about human death; it has happened billions of times already. Humanity has never yet been wiped out by anything (in our world, at least). If you want people to start thinking rationally about the future, it seems backwards to start with something less well-understood and more nebulous. Start with a concrete problem like age-related death; most people can understand that.
As to the moral worth of people not yet born, I do consider that lower than people already in existence by far because the probability of them existing as specific individuals is not set in stone yet. I don’t think contraception is a crime, for example.
The continuation of the human race does have extremely high moral utility, but not for the same sort of reason that preventing millions or billions of deaths does. If a few dozen breeding humans of both genders and high genetic variation are kept in existence (with a record of our technology and culture), and the rest of us die in an asteroid collision or some such, it’s not a heck of a lot worse than what happens if we just let everyone die of old age. (Well, it is the difference between a young death and an old death, which is significant. But not orders of magnitude more significant.)
I have bookmarked your comment and will reflect on it.
BTW I share your way of valuing things as expressed in your final 2 grafs: my previous comment used the language of utilitarianism only because I expected that that would be the most common ethical orientation among my audience and did not wish to distract readers with my personal way of valuing things.
I wouldn’t necessarily say that it’s the most effective way to do x-risks advocacy, but it’s one introduction to the whole general field of thinking seriously about the future, and it can provide useful extra motivation. I’m looking forward to reading more on the case against from you.
I’m worried about cryonics tainting “the whole general field of thinking seriously about the future” by being bad PR (head-freezers, etc), and also about it taking up a lot of collective attention.
I’ve never heard of someone coming to LW through an interest in cryonics, though I’m sure there are a few cases.