You are telling me that cryonically suspending myself is less charitable than donating the same resources to an efficient charity? Um… yes?
I don’t think this post contains a non-trivial insight. I found the normative presumptions interspersed with the text distasteful. Multi also presents a misleading image of what best represents the values of most people.
Multi . . . presents a misleading image of what best represents the values of most people.
Yes, but many of the participants on this web site share Multifoliate’s interest in philanthropy. In fact, the site’s subtitle and mission statement, “refining the art of human rationality,” came about as a subgoal of the philanthropic goals of the site’s founder.
I don’t think this post contains a non-trivial insight.
I found it a good answer to the belief which is common around here that cryonics advocacy is an efficient form of philanthropy.
the belief which is common around here that cryonics advocacy is an efficient form of philanthropy.
Is that belief really common around here? Though I’m inclined to make an effort to get Hitchens to sign up, I think of that effort as self-indulgence in much the same way as I’d think of such efforts for those close to me, or my own decision to sign up.
OK, maybe “common belief” is too strong. Change it to, “make sure no one here is under the illusion that cryonics advocacy is an efficient form of philanthropy, rather than a way to protect one’s own interests while meeting like-minded people and engaging in an inefficient form of philanthropy, though I personally doubt that it decreases x-risks.”
I think there are different approaches to cryonics. Advocating global or wide-scale conversion to cryonics is a philanthropic interest. It is very different from a focus on getting yourself preserved using existing organizations and on existing scales—though they are certainly compatible and complementary interests.
To some extent I support seeing your own preservation as self-interest, under the assumption that this means you do not deduct it from your mental bank account for charitable giving (i.e. you’ll give the same amount to starving kids and life-saving vaccines as you did before signing up). However, it is a huge mistake to claim that it is purely self-interest or at odds with charitable interests. Rather, it helps lay the groundwork for a hugely important philanthropic interest.
it is a huge mistake to claim that [one’s own cryopreservation] is purely self-interest or at odds with charitable interests. Rather, it helps lay the groundwork for a hugely important philanthropic interest.
OK, you are appealing to the same argument that can be used to argue that the consumers of the 1910s who purchased and used the first automobiles were philanthropists for supporting a fledgling industry which went on to cause a substantial rise in the average standard of living. Do I have that right?
If so, the magnitude of the ability of cryonics to extend life expectancy might cause me to admit that your words “huge” and “hugely” are justified—but only under value systems that assign no utility to the people who will be born or created after the intelligence explosion. Relative to the number of people alive now or who will be born before the intelligence explosion, the expected number of lives after it is huge, and cryonics is of no benefit to those lives, whereas any effort we make towards reducing x-risks benefits both the relatively tiny number of people alive now and the huge number that will live later.
The three main reasons most philanthropists do not direct their efforts at x-risk reduction are (1) they do not know and will not learn about the intelligence explosion; (2) even if they know about it, it is difficult for them to stay motivated when the objects of their efforts are as abstract as people who will not start their lives for hundreds of years—they need to travel to Africa or whatnot and see the faces of the people they have helped, or at least know that if they were to travel there, they would; and (3) they could figure out how to stay motivated to help those who will not start their lives for hundreds of years if they wanted to, but they do not want to—their circle of concern does not extend that far into the future (that is, they assign zero or very little intrinsic value to a life that starts in the far future).
But the people whose philanthropic enterprise is to get people to sign up for cryonics do not have excuses (1) and (2). So I have to conclude that either their circle of moral concern stops (or becomes very thin) before the start of the intelligence explosion, or they know that their enterprise is extremely inefficient philanthropy relative to x-risk reduction. Do you see any holes in my reasoning?
There are those (e.g., Carl and Nancy in these pages in the last few days and Roko in the past IIRC) who have taken the position that getting people to sign up for cryonics tends to reduce x-risks. I plan a top-level submission with my rebuttal to that position.
I do think reducing x-risk is extremely important. I agree with Carl, Nancy, Roko, etc. that cryonics tends to reduce x-risk. To reduce x-risk you need people to think about it in the first place, and cryonicists are more likely to do so because it is a direct threat to their lives.
Cryonics confronts a much more concrete and well-known phenomenon than x-risk. We all know about human death, it has happened billions of times already. Humanity has never yet been wiped out by anything (in our world at least). If you want people to start thinking rationally about the future, it seems backwards to start with something less well-understood and more nebulous. Start with a concrete problem like age-related death; most people can understand that.
As to the moral worth of people not yet born, I do consider it far lower than that of people already in existence, because the probability of them existing as specific individuals is not set in stone yet. I don’t think contraception is a crime, for example.
The continuation of the human race does have extremely high moral utility, but it is not for the same sort of reason that preventing millions or billions of deaths does. If a few dozen breeding humans of both genders and high genetic variation are kept in existence (with a record of our technology and culture), and the rest of us die in an asteroid collision or some such, it’s not a heck of a lot worse than what happens if we just let everyone die of old age. (Well, it is the difference between a young death and an old death, which is significant. But not orders of magnitude more significant.)
I have bookmarked your comment and will reflect on it.
BTW I share your way of valuing things as expressed in your final two paragraphs: my previous comment used the language of utilitarianism only because I expected that that would be the most common ethical orientation among my audience and did not wish to distract readers with my personal way of valuing things.
I wouldn’t necessarily say that it’s the most effective way to do x-risks advocacy, but it’s one introduction to the whole general field of thinking seriously about the future, and it can provide useful extra motivation. I’m looking forward to reading more on the case against from you.
I’m worried about cryonics tainting “the whole general field of thinking seriously about the future” by being bad PR (head-freezers, etc), and also about it taking up a lot of collective attention.
I’ve never heard of someone coming to LW through an interest in cryonics, though I’m sure there are a few cases.
You’re one of the few commentators who understands the point of my post.
Lots of people here understand the point of your post. Some of us think it is evil to discourage folks from doing cryonics advocacy, since it is likely the only way to save any of the billions of people who are currently dying.
Personally, I’m not a cryonics advocate. But know your audience, and if you’ve noticed that most of the people around here don’t seem to understand something, it’s probably a good time to check your assumptions and see what you’ve missed.
This comes across as if you’re miffed at the commentators rather than at yourself—is that what you mean?
I’m both irritated by those commentators who responded without taking the time to read my post carefully and disappointed in myself for failing to communicate clearly. On the latter point, I’ll be revising my post as soon as I get a chance. (I’m typing from my iPod at the moment.)
Yes, but many of the participants on this web site share Multifoliate’s interest in philanthropy.
I have an interest in philanthropy (and altruism in general).
I note that Multi’s post can have a positive influence on my own personal wellbeing. I know I’m not going to be sucked into self-destruction—the undesirable impact is suffered by others. Any effort spent countering the influence would be considered altruistic.
If you don’t have any interest in philanthropy then my post was not intended for you, and I think that it’s unfortunate that my post increased LessWrong’s noise-to-signal ratio for you.
If you have some interest in philanthropy, then I would be interested in knowing what you’re talking about when you say:
I found the normative presumptions interspersed with the text distasteful.
If you don’t have any interest in philanthropy then my post was not intended for you
Given that your argument only rules out cryonics for genuine utilitarians or altruists, it’s quite possible to have some concern for philanthropy and yet enough concern for yourself to make cryonics the rational choice. You’re playing up a false dilemma.
If you don’t have any interest in philanthropy then my post was not intended for you
I like philanthropy, and not your sermon.
I think that it’s unfortunate that my post increased LessWrong’s noise-to-signal ratio for you.
I don’t consider this post noise. It is actively bad signal. There is a universal bias that makes it difficult to counter “people should be more altruistic” claims of any kind.
If you have some interest in philanthropy, then I would be interested in knowing what you’re talking about when you say:
‘Should’ claims demanding that people sacrifice their very lives to donate the resources that allow their very survival to charity. In particular, in those instances where they are backed up with insinuations that ‘analytical skills’ and rational ability in general require such sacrifice.
The post fits my definition of ‘evil’.
‘Should’ claims demanding that people sacrifice their very lives to donate the resources that allow their very survival to charity. In particular, in those instances where they are backed up with insinuations that ‘analytical skills’ and rational ability in general require such sacrifice.
Nope, you’ve misunderstood me. Nowhere in my post did I say that people should sacrifice their lives to donate resources to charity. See my response to ciphergoth for my position. If there’s some part of my post that you think that I should change to clarify my position, I’m open to suggestions.
Nope, you’ve misunderstood me. Nowhere in my post did I say that people should sacrifice their lives to donate resources to charity.
That’s exactly what you’re saying, as far as I can tell. Are you not advocating that people should give money to charity instead of being cryopreserved? While I think charity is a good thing, I draw the line somewhere shy of committing suicide for the benefit of others.
My post is about how cryonics should be conceptualized rather than an attempt to advocate a uniform policy of how people should interact with cryonics. Again, see my response to ciphergoth. For ciphergoth, cryonics may be the right thing. I personally do not derive fuzzies from the idea of signing up for cryonics (I get my fuzzies in other ways) and I don’t think that people should expend resources trying to change this.
Perhaps, but I have not misunderstood the literal meaning of the words in the post.
Downvoted for being unnecessarily polemical.
Yet surprisingly necessary. The nearly ubiquitous pattern when people object to demands regarding charity is along the lines of “it’s just not interesting to you but for other people it is important” or “it’s noise vs signal”. People are slow to understand that it is possible to be entirely engaged with the topic and think it is bad. After all, the applause lights are all there, plain as day—how could someone miss them?
Multi also presents a misleading image of what best represents the values of most people.
You may be right; on the other hand, you may be generalizing from one example. Claims that an author’s view of human values is misleading should be substantiated with evidence.
“The CEV of most individuals is not Martyrdom” is not something that I consider overwhelmingly contentious.
Nitpick: Individuals don’t have CEV. They have values that can be extrapolated, but the “coherent” part is about large groups; Eliezer was talking about the CEV of all of humanity when he proposed the idea, I believe.
In this instance I would be comfortable using just “EV”. In general, however, I see the whole process of conflict resolution between agents as one that isn’t so clearly delineated at the level of the individual.
Eliezer was talking about the CEV of all of humanity when he proposed the idea, I believe.
He was, and that is something that bothers me. The coherent extrapolated volition of all of humanity is quite likely to be highly undesirable. I sincerely hope Eliezer was lying when he said that. If he could right now press a button to execute an FAI based on the CEV of all of humanity, I would quite possibly do what I could to stop him.
If he could right now press a button to execute an FAI based on the CEV of all of humanity, I would quite possibly do what I could to stop him.
Since we have no idea what that entails and what formalizations of the idea are possible, we can’t extend moral judgment to that unclear unknown hypothetical.
I fundamentally disagree with what you are saying, and object somewhat to how you are saying it.
You are drawing moral judgment about something ill-defined, a sketch that can be made concrete in many different ways. This just isn’t done, it’s like expressing a belief about the color of God’s beard.
You are mistaken. Read again.
I am mentioning a possible response to a possible stimulus. Doubt in the interpretation of the words is part of the problem. If I knew exactly how Eliezer had implemented CEV and what the outcome would be given the makeup of the human population then that would make the decision far simpler. Without such knowledge choosing whether to aid or hinder must be based on the estimated value of the alternatives given the information available.
Also note that the whole “extend moral judgment” concept is yours, I said nothing about moral judgements, only possible decisions. When the very fate of the universe is at stake I can most certainly make decisions based on inferences from whatever information I have available, including the use of the letters C, E and V.
This just isn’t done, it’s like expressing a belief about the color of God’s beard.
Presenting this as an analogy to deciding, on limited information, whether or not to hinder the implementation of an AI is absurd to the point of rudeness.
Also note that the whole “extend moral judgment” concept is yours, I said nothing about moral judgements, only possible decisions.
What I meant is simply that decisions are made based on valuation of their consequences. I consistently use “morality” in this sense.
When the very fate of the universe is at stake I can most certainly make decisions based on inferences from whatever information I have available, including the use of the letters C, E and V.
I agree. What I took issue with about your comment was the perceived certainty of the decision. Under severe uncertainty, your current guess at the correct decision may well be “stop Eliezer”, but I don’t see how, with the present state of knowledge, one can have any certainty in the matter. And you did say that it’s “quite likely” that CEV-derived AGI is undesirable:
The coherent extrapolated volition of all of humanity is quite likely to be highly undesirable. I sincerely hope Eliezer was lying when he said that.
(Why are you angry? Do you need that old murder discussion resolved? Some other reason?)
I note, by the way, that I am not at all suggesting that Eliezer is actually likely to create an AI-based dystopia. The risk of that is low (relative to the risk of alternatives).
I don’t quite see how one is supposed to limit an FAI’s CEV to some subgroup without the race for AI turning into a war of all against all, for not just power but survival.
If anything I would like to expand the group not just to currently living humans but to all other possible cultures that biologically modern humans did or could have developed.
But again this is purely because I value a diverse future. Part of my paperclip is to make sure other people get a share of the mass of the universe to paperclip.
I don’t quite see how one is supposed to limit an FAI’s CEV to some subgroup without the race for AI turning into a war of all against all, for not just power but survival.
By winning the war before it starts or solving cooperation problems.
The competition you refer to isn’t prevented by proposing an especially egalitarian CEV. Being included in the Coherent Extrapolated Volition equation is not sufficient reason to stand down in a fight for FAI creation.
But again this is purely because I value a diverse future. Part of my paperclip is to make sure other people get a share of the mass of the universe to paperclip.
CEV would give that result. The ‘coherence’ thing isn’t about sharing. The CEV of two agents A and B may well decide to give all the mass of the universe to a third agent C purely because A and B can’t stand each other, while if C were included in the same evaluation, the CEV of A, B, and C may well decide to do something entirely different. Sure, at least one of those agents is clearly insane, but the point is that being ‘included’ is not intrinsically important.
Individuals don’t have CEV.
The singleton sets of individuals do...
“The CEV of most individuals is not Martyrdom” is not something that I consider overwhelmingly contentious.
I don’t think that anything in my post advocates martyrdom. What part of my post appears to you to advocate martyrdom?
To put it in the visceral language favored by cryonics advocates, you’re advocating that people commit suicide for the benefit of others.