Once there are a few thousand people working on existential risks, the marginal expected utility of recruiting another worker goes down. People start working at cross-purposes because of not knowing enough about each other’s plans.
Rather than increasing the number of e-risk workers as fast as possible, the recruiting strategy that minimizes e-risks is to figure out what personal qualities make for the best e-risk workers and differentially to recruit people with those qualities.
And the most decisive personal quality I know about has to do with the “motivational structure” of the prospective worker: what natural human desires and pleasures (and perhaps natural human fears) motivate him or her? Two natural human motivations that cause many of the people currently working on e-risks, or currently watching the public discourse about e-risks, to persist in these activities are self-interest and altruism.
A lot of recruiting consists of written and oral communications. It is fairly straightforward to tailor the communications so that they are of strong interest to, e.g., people interested in being altruistic while being boring to people motivated by, e.g., their own personal survival. This gets harder the more the reader knows about e-risks and about the singularity, but at present, most very bright people do not know much about these topics.
Consequently, communications that inform people about the singularity should be tailored to be interesting to those with the right motivations.
Since not enough people of sufficiently high prestige advocate cryonics to make an argument for cryonics by authority persuasive, the only effective way to persuade people to sign up is with an argument on the scientific merits, which entails explaining the singularity to them. I.e., communications whose purpose is to get people to sign up for cryonics are necessarily also communications that inform people about the singularity, and this may be their more important effect even if the author’s intent is simply to get people to sign up for cryonics.
Maybe I’m wrong, but I tend to think that any discussion of cryonics I can imagine which is effective at getting people to sign up will tend to recruit the wrong people into the ranks of the singularitarians.
I am struck in particular by the difficulty of getting people to read communications about cryonics that do not appeal to their interest in their own survival.
And I think it increases existential risk to create communications that inform people about the singularity while appealing to the reader’s interest in his or her own survival.
People whose motivation is self-interest have a track record of being easy to control or influence by threats of imprisonment, violence, or other personal hardships. Scientific workers in particular have a track record of being easily cowed by bullies, e.g., government officials and business tycoons. Moreover, the prospective bullies tend to have goals that most people reading this would disapprove of.
There have probably not yet been any instances in which e-risk workers have been influenced or subverted by bullies, but there’s a significant chance of it happening during the second phase of the work (when it will become obvious to any intelligent observer that the singularitarians command significant scientific and technical resources) if a significant fraction of singularitarians have self-interest as their strongest motivation for persisting in their work towards the singularity.
I take my personal history as another piece of evidence for the hypothesis that people should discourage knowledge of the singularity, and work on the singularity, by people whose motivation for doing so is self-interest. When I was growing up in the 1970s, one of the books in my house, Get Ready for Immortality by Lawrence Lamb, M.D., caused me to come to hope to live millions of years. It was not a major theme in my thinking, but it was definitely there. And I do not consider the person I was in my teens (or even my early 20s) to be the sort of person who should be encouraged to learn about and work toward the singularity.
By the time I came upon Eliezer’s writings, when I was 40 or 41, I had lost any desire to live millions of years. My motivation for trying to understand Eliezer’s point about seed AI was to help me predict whether the research I was doing on new programming languages might have far-reaching negative consequences (by giving AGI researchers tools they might otherwise not come to have). (If the languages did have negative consequences, I meant to leave them unpublished.) I.e., my motivation was to avoid doing harm through failure to think through the consequences of research I had done for the pleasure of scientific discovery.
And that is an example of the kind of motivation that should IMHO be encouraged. And note that it is quite easy to write in a way that is boring to the hypothetical reader who is motivated only by maximizing the odds of his or her own survival and interesting to the hypothetical reader who is interested only in learning enough about the likely consequences of his or her behavior (and the behavior of his or her friends and coworkers) to make sure he or she is not unknowingly doing harm.
So, that is a little bit about why I tend not to think that persuading people to sign up for cryonics reduces existential risk. Yes, I see the same positive effects as JGWeissman, angry parsley and Eliezer. But I estimate the negative effects I have just described to outweigh them.
Getting people to sign up for cryonics is certainly not a strategy I would choose if my goal were to minimize existential risks. (I.e., there are almost certainly better strategies.)