Isn’t it more sane to donate money to organizations fighting against existential risks rather than spending money on cryonics?
Yes. Your argument applies to everything money can be spent on, not just cryonics. But unlike most things you can spend money on, cryonics has the advantage of forcing you to care about the future. It provides an incentive to donate to fighting existential risk.
It also provides a personal incentive to hurry the intelligence explosion along so that it occurs before the death of the people signed up for cryonics. [ADDED: I concede that what I just said does not make sense; I went to delete it a few minutes after I submitted it, but people had already replied. Please do not reply to this.] In other words, it provides a disincentive to pursue a strategy of discouraging or suppressing existentially risky research (on, e.g., AGI) so that less risky research represents a larger share of the total. That is, it recruits the people most able to understand and respond effectively to existential risks to spend (collectively) many millions of dollars in a way that gives them a personal disincentive to pursue what I consider a very worthwhile strategy for addressing the existential risks posed by certain lines of scientific research.
Most people who have expressed an opinion seem to believe that there is no way to stop, or significantly slow, lines of research that (like AGI) can be continued with just a PC and access to the open scientific literature. But I tend to think such research can be stopped or slowed a great deal if effective people put as much effort into explaining why it is bad as Eliezer and his followers are putting into convincing people to sign up for cryonics.
According to my models, convincing people to sign up for cryonics at the current time does nothing to reduce existential risk. The opposite, in fact.
As far as I can tell, your argument supports the reverse of your conclusion: People signed up for cryonics have less incentive to do fast, risky things. This line of reasoning is sufficiently strange that I call motivated cognition on this one.
Once there are a few thousand people working on existential risks, the marginal expected utility of recruiting another worker goes down. People start working at cross-purposes because they do not know enough about each other’s plans.
Rather than increasing the number of e-risk workers as fast as possible, the recruiting strategy that minimizes e-risks is to figure out what personal qualities make for the best e-risk workers and differentially to recruit people with those qualities.
And the most decisive personal quality I know about has to do with the “motivational structure” of the prospective worker: what natural human desires and pleasures (and perhaps natural human fears) motivate him or her? Two natural human motivations that cause many of the people currently working on e-risks, or currently watching the public discourse about e-risks, to persist in these activities are self-interest and altruism.
A lot of recruiting consists of written and oral communications. It is fairly straightforward to tailor those communications so that they are of strong interest to, e.g., people who want to be altruistic while being boring to people motivated by, e.g., their own personal survival. This gets harder the more the reader knows about e-risks and about the singularity, but at present, most very bright people do not know much about these topics.
Consequently, communications that inform people about the singularity should be tailored to be interesting to those with the right motivations.
Since not enough people of sufficiently high prestige advocate cryonics to make an argument for cryonics by authority persuasive, the only effective way to persuade people to sign up is with an argument on the scientific merits, which entails explaining the singularity to them. That is, a communication whose purpose is to get people to sign up for cryonics is necessarily also a communication that informs people about the singularity, and this might be its more important effect even if the author’s intent is simply to get people to sign up.
Maybe I’m wrong, but I tend to think that any discussion of cryonics I can imagine which is effective at getting people to sign up will tend to recruit the wrong people into the ranks of the singularitarians.
I am particularly struck by the difficulty of getting people to read communications about cryonics without appealing to their interest in their own survival.
And I think it increases existential risk to create communications that inform people about the singularity but whose appeal is to the reader’s survival interest.
People whose motivation is self-interest have a track record of being easy to control or influence by threats of imprisonment, violence, or other personal hardships. Scientific workers in particular have a track record of being easily cowed by bullies, e.g., government officials and business tycoons. Moreover, the prospective bullies tend to have goals that most people reading this would disapprove of.
There have probably not yet been any instances in which e-risk workers have been influenced or subverted by bullies, but there is a significant chance of it happening during the second phase of the work (when it will become obvious to any intelligent observer that the singularitarians command significant scientific and technical resources) if a significant fraction of singularitarians have self-interest as their strongest motivation for persisting in their work toward the singularity.
I take my personal history as another piece of evidence for the hypothesis that people should discourage knowledge of, and work on, the singularity by people whose motivation for doing so is self-interest. When I was growing up in the 1970s, one of the books in my house, Get Ready for Immortality by Lawrence Lamb, M.D., led me to hope to live millions of years. It was not a major theme in my thinking, but it was definitely there. And I do not consider the person I was in my teens (or even my early 20s) to be the sort of person who should be encouraged to learn about and work toward the singularity.
By the time I came upon Eliezer’s writings, when I was 40 or 41, I had lost any desire to live millions of years. My motivation for trying to understand Eliezer’s point about seed AI was to help me predict whether the research I was doing on new programming languages might have far-reaching negative consequences (by giving AGI researchers tools they might otherwise not come to have). (If they did have negative consequences, I meant to leave them unpublished.) That is, my motivation was to avoid doing harm through failing to think through the consequences of research I had done for the pleasure of scientific discovery.
And that is an example of the kind of motivation that should, IMHO, be encouraged. Note that it is quite easy to write in such a way that the text bores the hypothetical reader who is motivated only by maximizing the odds of his or her own survival, while interesting the hypothetical reader who wants only to learn enough about the likely consequences of his or her behavior (and the behavior of friends and coworkers) to make sure he or she is not unknowingly doing harm.
So, that is a little bit about why I tend not to think that persuading people to sign up for cryonics reduces existential risk. Yes, I see the same positive effects as JGWeissman, angry parsley and Eliezer. But I estimate the negative effects I have just described to outweigh them.
Getting people to sign up for cryonics is certainly not a strategy I would choose if my goal were to minimize existential risks. (I.e., there are almost certainly better strategies.)
We agree that getting people to sign up for cryonics increases their hope for a post-singularity existence and thus their likelihood of supporting singularity-directed research, even though reviving a frozen, near-defunct body or brain does not strictly require a singularity.
Whether that’s good or bad depends on your view of whether widespread efforts to reach a good singularity are likely to go disastrously wrong. Clearly, if the goal is popularized widely, an enlightened FAI research program will need to spend effort on PR in order to draw funds away from sloppier aspirants. Considering all that, I expect widespread interest in and funding for AI research to change only the date, not the quality, of any singularity.
Well, what’s my personal motivation, then, if I am engaging in motivated cognition?
But I do concede that my comment has a big problem here: “provides a personal incentive to hurry the intelligence explosion along so that it occurs before the death of the people signed up for cryonics” and I would have deleted my comment had you not replied already. Give me a few minutes to try to reconstruct the thinking that led to my conclusion.
One part is that getting people who are living now to hope to live a very long time disincentivizes them from considering strategies in which the singularity happens after they die.
But there was another part, ISTR.
Based on your premises, don’t you mean the opposite of everything you just said?
If people are frozen we can take as much time as we need. If they age and die then we have an incentive to work faster. (Although if you do the math, the current world population is insignificant compared to the potential future of humanity, so cautiousness should win out either way.)
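To make that math concrete, here is a minimal back-of-the-envelope sketch; both figures are assumptions (astronomical-waste-style estimates of the future run far higher), and the conclusion holds for any future total many orders of magnitude above the current population:

```python
# Back-of-the-envelope check of "the current world population is
# insignificant compared to the potential future of humanity".
# Both figures below are assumptions for illustration.

current_population = 7e9         # roughly everyone alive today
potential_future_people = 1e16   # an assumed, conservative future total

# Worst case for a slow, cautious approach: everyone now alive dies
# waiting (no cryonics, no life extension).
cost_of_delay = current_population

# Benefit of caution: even a small absolute cut in existential risk
# is multiplied by the entire potential future.
assumed_risk_reduction = 0.001   # an assumed 0.1-percentage-point cut
benefit_of_caution = assumed_risk_reduction * potential_future_people

print(f"cost of delay:      {cost_of_delay:.1e} lives")
print(f"benefit of caution: {benefit_of_caution:.1e} expected lives")
print(f"caution wins by a factor of {benefit_of_caution / cost_of_delay:,.0f}")
```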
My comment pointed out how cryonics creates a personal selfish reason to care about the future. I’d like for people to base their decisions on altruism, but the fact is that we’re only human.
It also provides a personal incentive to hurry the intelligence explosion along so that it occurs before the death of the people signed up for cryonics.
What?! If I do something to increase my chances of being revived given a positive Singularity after my death, then I should be more willing to pursue a strategy that increases the chances of an eventual Singularity being positive at the expense of the chances of a fast Singularity which would occur before my death.
Cryonics, by increasing the time we can wait, reduces the pain of delay.
Since most people who donate to fight existential risks don’t donate everything they have above subsistence level, there’s usually enough money to do both (since cryonics via life insurance isn’t very expensive, afaik).
But surely you wouldn’t be donating enough to, say, fighting existential risks so that the marginal utility of the next dollar spent there drops below that of the marginal utility spent on cryonics. Not that I’m suggesting that fighting existential risks necessarily has a higher marginal utility than cryonics. Rather, you probably don’t have enough money to change the relative rankings, so you should donate to the cause with the highest marginal utility. Not both.
The exception may be donating enough to make sure YOU are reanimated after you die (I don’t know what your utility function looks like), but in that case you aren’t really donating.
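A minimal sketch of the marginal-utility argument above, assuming concave (here logarithmic) returns to funding; every funding level and utility scale below is made up for illustration:

```python
def marginal_utility(scale, funding):
    # For U(x) = scale * ln(x), the marginal utility is U'(x) = scale / x.
    return scale / funding

# Assumed current funding levels and utility scales (illustrative only).
mu_xrisk = marginal_utility(scale=1e6, funding=5e6)
mu_cryo = marginal_utility(scale=1e4, funding=1e5)

budget = 2_000.0  # a typical personal donation

# Even after the whole budget goes to the leading cause, its marginal
# utility barely moves, so a personal donation cannot flip the ranking:
mu_xrisk_after = marginal_utility(scale=1e6, funding=5e6 + budget)

print(f"x-risk margin before/after: {mu_xrisk:.5f} / {mu_xrisk_after:.5f}")
print(f"cryonics margin:            {mu_cryo:.5f}")

# Corner solution: donate everything to the higher-margin cause.
best = "existential risk" if mu_xrisk > mu_cryo else "cryonics"
print(f"donate the full budget to: {best}")
```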
Surely you should be asking about the marginal utility of money spent on eating out before you ask about money spent on cryonics. What is this strange mental accounting where money spent on cryonics is immediately available to be redirected to existential risks, but money spent on burritos or French restaurants or an extra 100sqft in an apartment is not?
I have a theory about this, actually. How it works is: people get paid at the beginning of the month, and then pay their essential bills: food, rent, electricity, insurance, Internet, etc.
What happens next is that people have a certain standard of living that they think they’re supposed to have, based somewhat on how much money they make, but much more on what all their friends are spending money on. They then go out and buy stuff like fancy dinners and a house in the suburbs and whatnot, and this spending is not mentally available as something that can be cut back on, because they don’t see it as “spending” so much as “things I need to do to maintain my standard of living”. People see it as a much larger burden to write a single check for $2,000 than to spend $7 every day on coffee, because they come out of different mental pools.
Anything left over after that gets put into cryonics, or existential risk, or savings, or investments, etc. That’s why you see so many more millionaire plumbers than millionaire attorneys: the attorney has a higher standard of living, and so has less money left over to save.
“A man does not ‘by nature’ wish to earn more and more, but simply to live as he is accustomed to live and earn as much as is necessary for that purpose… & a people only work because and so long as they are poor.”
--Max Weber, Protestant Ethic
That’s why you see so many more millionaire plumbers than millionaire attorneys, because the attorney has a higher standard of living, and so has less money left over to save.
We do?
I was going to comment on that, but I don’t see any millionaires at all, so I thought I shouldn’t.
The main point of “The Millionaire Next Door” is that you might not notice millionaires.
See The Millionaire Next Door, http://www.amazon.com/Millionaire-Next-Door-Thomas-Stanley/dp/0671015206 .
It cites statistics, and actually says that there are X millionaire lawyers, and X+Y plumbers? It isn’t just giving a lot of anecdotes?
I would be very surprised to hear that, because it implies that one is substantially more likely to become a millionaire by plumbing than by lawyering, since there are ~500,000 plumbers in the US and >1.1 million lawyers.
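To make the base-rate point explicit, a small sketch; the occupation counts come from the comment above, while the millionaire counts are placeholders chosen to show the weakest case (exact parity):

```python
# Occupation counts from the comment above; millionaire counts are
# placeholder assumptions, set equal to show the weakest case.
plumbers, lawyers = 500_000, 1_100_000
millionaire_plumbers, millionaire_lawyers = 1_000, 1_000

p_plumber = millionaire_plumbers / plumbers  # P(millionaire | plumber)
p_lawyer = millionaire_lawyers / lawyers     # P(millionaire | lawyer)

# Even with *equal* millionaire counts, plumbing looks 2.2x as likely;
# "more millionaire plumbers than lawyers" would widen the gap further.
print(f"plumbers are {p_plumber / p_lawyer:.1f}x as likely to be millionaires")
```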
It cites statistics, and actually says that there are X millionaire lawyers, and X+Y plumbers? It isn’t just giving a lot of anecdotes?
According to Wikipedia, it (1) generally cites statistics and (2) says that doctors, lawyers, and accountants save a much lower proportion of their money than other occupations. Google Books says that it doesn’t mention plumbers at all.
I would guess that pretty much all lawyers permanently employed at BIGLAW are millionaires and pretty much no other lawyers are; but that’s probably enough to beat plumbers. I think the other lawyers have a similar income distribution to plumbers.
That seems natural enough to me: it’s the net income of the very limited part of you that identifies as “you” because it can sometimes talk and think about abstractions.
On the one hand, yes, but on the other hand, I sometimes worry that we’re getting a little too cynical around these Hansonian parts.
In any case, cryonics is a one-time expenditure for that part of you. It looms large in the imagination in advance, but afterward the expenditure almost instantly fades into the background of the monthly rent, less salient than burritos.
Cynicism is boring. Build a map that matches the territory. That map looks terribly Hansonian but doesn’t have its ‘cynical’ bit set to ‘yes’.
The deliberative part of “you” that thinks about cryonics may not be the same part that chooses restaurants, but doesn’t it play a role in choosing apartments?
Agreed, but the deliberative part may actually think that the larger and better located apartment contributes more to global utility, at least if you are the head of the Singularity Institute and you just spent the last 6 years living with a wife in 200 square feet.
If you have a sufficiently selfish utility function, it may make sense to spend that extra money on French restaurants and the bigger apartment. But otherwise, yes, the lowest-hanging fruit is spending less money on things like going out or new electronic toys.
It occurs to me to suggest that donating to both allows you to hedge your bets; one or the other might end up not producing results at all.
Which seems to be a similar impulse to the one that causes people to guess blue 70% of the time and red 30%, though the situation is different enough that hedging might make sense here.
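For reference, a quick check of why the 70/30 guessing impulse loses on independent draws; hedged donations can still be a different case, since causes, unlike card draws, have diminishing returns and genuinely uncertain models:

```python
# If blue comes up with probability 0.7 on independent draws, always
# guessing blue beats "probability matching" (guessing blue 70% of
# the time and red 30%).
p_blue = 0.7

always_blue = p_blue                                # 0.70
probability_matching = p_blue**2 + (1 - p_blue)**2  # 0.49 + 0.09 = 0.58

print(f"always guess blue:    {always_blue:.2f} expected accuracy")
print(f"probability matching: {probability_matching:.2f} expected accuracy")
```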