Cryonics has the advantage of forcing you to care about the future. It provides an incentive to donate to fighting existential risk.
It also provides a personal incentive to hurry the intelligence explosion along so that it occurs before the death of the people signed up for cryonics. [ADDED: I concede that what I just said does not make sense; I went to delete it a few minutes after I submitted it, but people had already replied. Please do not reply to this.] In other words, it provides a disincentive to pursue a strategy of discouraging or suppressing existentially risky research (on, e.g., AGI) so that less risky research represents a larger share of the total. Put another way, it recruits the people best able to understand and respond effectively to existential risks to spend (collectively) many millions of dollars in a way that gives them a personal disincentive to pursue what I consider a very worthwhile strategy for addressing the existential risks posed by certain lines of scientific research.
Most people who have expressed an opinion seem to believe that lines of research which (like AGI) can be continued with just a PC and access to the open scientific literature cannot be stopped or significantly slowed. But I tend to think such research can be stopped or slowed a great deal if effective people put as much effort into explaining why it is harmful as Eliezer and his followers are putting into convincing people to sign up for cryonics.
According to my models, convincing people to sign up for cryonics at the current time does nothing to reduce existential risk. The opposite, in fact.
As far as I can tell, your argument supports the reverse of your conclusion: People signed up for cryonics have less incentive to do fast, risky things. This line of reasoning is sufficiently strange that I call motivated cognition on this one.
Once there are a few thousand people working on existential risks, the marginal expected utility of recruiting another worker goes down: people start working at cross-purposes because they do not know enough about each other's plans.
Rather than increasing the number of e-risk workers as fast as possible, the recruiting strategy that minimizes e-risks is to figure out what personal qualities make for the best e-risk workers and differentially to recruit people with those qualities.
And the most decisive personal quality I know about has to do with the "motivational structure" of the prospective worker: what natural human desires and pleasures (and perhaps natural human fears) motivate him or her? Two natural human motivations that cause many of the people currently working on e-risks, or currently watching the public discourse about e-risks, to persist in these activities are self-interest and altruism.
A lot of recruiting consists of written and oral communications. It is fairly straightforward to tailor those communications so that they are of strong interest to, e.g., people interested in being altruistic while being boring to people motivated by, e.g., their own personal survival. This gets harder the more the reader knows about e-risks and about the singularity, but at present most very bright people do not know much about these topics.
Consequently, communications that inform people about the singularity should be tailored to be interesting to those with the right motivations.
Since not enough people of sufficiently high prestige advocate cryonics to make an argument from authority persuasive, the only effective way to persuade people to sign up is with an argument on the scientific merits, which entails explaining the singularity to them. I.e., communications whose purpose is to get people to sign up for cryonics are necessarily also communications that inform people about the singularity, and this may be their more important effect even if the author's intent is simply to get people to sign up for cryonics.
Maybe I’m wrong, but I tend to think that any discussion of cryonics I can imagine which is effective at getting people to sign up will tend to recruit the wrong people into the ranks of the singularitarians.
I am struck particularly by the difficulty of getting people to read communications about cryonics without appealing to their interest in their own survival.
And I think it increases existential risk to create communications that are informative about the singularity and whose appeal is to the reader's interest in his or her own survival.
People whose motivation is self-interest have a track record of being easy to control or influence by threats of imprisonment, violence, or other personal hardships. Scientific workers in particular have a track record of being easily cowed by bullies, e.g., government officials and business tycoons. Moreover, the prospective bullies tend to have goals that most people reading this would disapprove of.
There have probably not yet been any instances in which e-risk workers have been influenced or subverted by bullies, but there is a significant chance of it happening during the second phase of the work (when it will become obvious to any intelligent observer that the singularitarians command significant scientific and technical resources) if a significant fraction of singularitarians have self-interest as their strongest motivation for persisting in their work toward the singularity.
I take my personal history as another piece of evidence for the hypothesis that people should discourage knowledge of the singularity, and work on the singularity, by people whose motivation for doing so is self-interest. When I was growing up in the 1970s, one of the books in my house, Get Ready for Immortality by Lawrence Lamb, M.D., caused me to come to hope to live for millions of years. It was not a major theme in my thinking, but it was definitely there. And I do not consider the person I was in my teens (or even my early 20s) to be the sort of person who should be encouraged to learn about and work toward the singularity.
By the time I came upon Eliezer's writings, when I was 40 or 41, I had lost any desire to live millions of years. My motivation for trying to understand Eliezer's point about seed AI was to help me predict whether the research I was doing on new programming languages might have far-reaching negative consequences (by giving AGI researchers tools they might otherwise not come to have). (If the languages did have such consequences, I meant to leave them unpublished.) I.e., my motivation was to avoid doing harm through failing to think through the consequences of research I had done for the pleasure of scientific discovery.
And that is an example of the kind of motivation that should, IMHO, be encouraged. And note that it is quite easy to write in such a way that the result is boring to the hypothetical reader who is motivated only by maximizing the odds of his or her own survival, and interesting to the hypothetical reader who is interested only in learning enough about the likely consequences of his or her behavior (and the behavior of his or her friends and coworkers) to make sure he or she is not unknowingly doing harm.
So, that is a little bit about why I tend not to think that persuading people to sign up for cryonics reduces existential risk. Yes, I see the same positive effects as JGWeissman, angry parsley and Eliezer. But I estimate the negative effects I have just described to outweigh them.
Getting people to sign up for cryonics is certainly not a strategy I would choose if my goal were to minimize existential risks. (I.e., there are almost certainly better strategies.)
We agree that getting people to sign up for cryonics increases their hope for a post-singularity existence and thus makes them more likely to support singularity-directed research, even though it does not take a singularity to revive a frozen, nearly defunct body or brain.
Whether that's good or bad depends on your view of whether widespread efforts aimed at reaching a good singularity are likely to go disastrously wrong. Clearly, if the goal is widely popularized, an enlightened FAI research program needs to spend effort on PR in order to steal funds from sloppier aspirants. Considering all that, I expect widespread interest in and funding for AI research to change only the date, not the quality, of any singularity.
Well, what’s my personal motivation, then, if I am engaging in motivated cognition?
But I do concede that my comment has a big problem here: “provides a personal incentive to hurry the intelligence explosion along so that it occurs before the death of the people signed up for cryonics” and I would have deleted my comment had you not replied already. Give me a few minutes to try to reconstruct the thinking that led to my conclusion.
One part is that getting people who are living now to hope to live a very long time disincentivizes them from considering strategies in which the singularity happens after they die.
But there was another part, ISTR.
Based on your premises, don’t you mean the opposite of everything you just said?
If people are frozen we can take as much time as we need. If they age and die then we have an incentive to work faster. (Although if you do the math, the current world population is insignificant compared to the potential future of humanity, so cautiousness should win out either way.)
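(To make the "do the math" point concrete, here is a rough back-of-the-envelope comparison; the figure for potential future lives is an arbitrary illustrative assumption, not anything from the discussion above. Hurrying at best saves the roughly 7 x 10^9 people now alive. If the future could hold on the order of 10^16 lives, then improving the probability of a good outcome by even one in a million is worth about 10^16 x 10^-6 = 10^10 expected lives, which already exceeds the entire present population. So caution dominates unless hurrying costs essentially nothing in outcome probability.)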
My comment pointed out how cryonics creates a personal selfish reason to care about the future. I’d like for people to base their decisions on altruism, but the fact is that we’re only human.
It also provides a personal incentive to hurry the intelligence explosion along so that it occurs before the death of the people signed up for cryonics.
What?! If I do something to increase my chances of being revived given a positive Singularity after my death, then I should be more willing to pursue a strategy that increases the chances of an eventual Singularity being positive at the expense of the chances of a fast Singularity which would occur before my death.
Cryonics, by increasing the time we can wait, reduces the pain of delay.