There’s simply no good reason to argue against cryonics. It offers a chance in the worst-case scenario, and that chance is considerably better than rotting six feet under.
Have you thought about the possibility that most experts are simply reluctant to come up with detailed critiques of the specific issues posed by the SIAI, EY and LW? Maybe they consider it not worth the effort, since the data that is already available does not justify the given claims in the first place.
Anyway, I think I might write some experts and all of the people mentioned in my post, if I’m not too lazy.
I’ve already got one reply, from someone I’m not going to name right now. But let’s first consider Yudkowsky’s attitude when addressing other people:
You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong...
Now, the reply from the first of the people I contacted about it:
There are certainly many reasons to doubt the belief system of a cult based around the haphazard musings of a high school dropout, who has never written a single computer program but professes to be an expert on AI. As you point out none of the real AI experts are crying chicken little, and only a handful of AI researchers, cognitive scientists or philosophers take the FAI idea seriously.
Read Moral Machines for current state of the art thinking on how to build a moral machine mind.
SIAI dogma makes sense if you ignore the uncertainties at every step of their logic. It’s like assigning absolute numbers to every variable in the Drake equation and determining that aliens must be all around us in the solar system, and starting a church on the idea that we are being observed by spaceships hidden on the dark side of the moon. In other words, religious thinking wrapped up to look like rationality.
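The Drake-equation analogy is easy to make concrete. Below is a minimal Python sketch (every factor and range is an illustrative assumption, not a figure taken from the commenter or from SIAI) contrasting a product of point estimates with the same product when each factor carries even modest uncertainty: the former yields one confident number, the latter a spread of many orders of magnitude.

```python
import math
import random

# Illustrative Drake-style factors: (point estimate, (low, high) uncertainty range).
# The numbers are placeholders chosen only to demonstrate the effect of uncertainty.
FACTORS = {
    "star_formation_rate": (2.0, (1.0, 10.0)),
    "fraction_with_planets": (0.5, (0.1, 1.0)),
    "habitable_per_system": (1.0, (0.1, 3.0)),
    "fraction_life": (0.5, (1e-3, 1.0)),
    "fraction_intelligence": (0.1, (1e-4, 1.0)),
    "fraction_communicating": (0.1, (1e-3, 1.0)),
    "lifetime_years": (1e4, (1e2, 1e8)),
}

def point_estimate() -> float:
    """Multiply the point estimates, ignoring uncertainty entirely."""
    result = 1.0
    for point, _ in FACTORS.values():
        result *= point
    return result

def sample_once() -> float:
    """Draw each factor log-uniformly from its range and multiply."""
    result = 1.0
    for _, (low, high) in FACTORS.values():
        result *= math.exp(random.uniform(math.log(low), math.log(high)))
    return result

if __name__ == "__main__":
    samples = sorted(sample_once() for _ in range(100_000))
    print(f"point estimate:    N ~ {point_estimate():.1f}")
    print(f"5th/50th/95th pct: "
          f"{samples[5_000]:.3g} / {samples[50_000]:.3g} / {samples[95_000]:.3g}")
```

The single number invites confident conclusions of the “aliens must be all around us” sort; the percentiles show that the same argument structure is compatible with wildly different answers once the uncertainties are kept in view.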
ETA
I was told the person I quoted above is stating outright ad hominem falsehoods regarding Eliezer. I think it is appropriate to edit the message to note that the person might not have been honest, or clueful. Otherwise I’ll unnecessarily end up perpetuating possible ad hominem attacks.
I feel some of the force of this...I do think we should take the opinions of other experts seriously, even if their arguments don’t seem good.
I sort of think that many of these criticisms of SIAI turn on not being Bayesian enough. Lots of people only want to act on things they know, where knowing requires really solid evidence, the kind of evidence you get through conventional experimental science, with low p-values and all. It is just impossible to have that kind of robust confidence about the far future. So you’re going to have people just more or less ignore speculative issues about the far future, even if those issues are by far the most important. Once you adopt a Bayesian perspective, and you’re just interested in maximizing expected utility, the complaint that we don’t have a lot of evidence about what will be best for the future, or the complaint that we just don’t really know whether SIAI’s mission and methodology are going to work seems to lose a lot of force.
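To make the contrast concrete, here is a minimal sketch of the two decision rules being described; the intervention names, probabilities and payoffs are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    p_success: float   # subjective probability the intervention matters at all
    payoff: float      # utility gained if it does (arbitrary units)
    cost: float        # utility spent regardless of outcome

def acts_under_evidence_rule(x: Intervention, threshold: float = 0.95) -> bool:
    """'Only act on things we really know': require near-certain evidence."""
    return x.p_success >= threshold

def acts_under_expected_utility(x: Intervention) -> bool:
    """Bayesian rule: act whenever expected benefit exceeds cost."""
    return x.p_success * x.payoff > x.cost

# Purely illustrative numbers: a speculative far-future intervention with a
# tiny probability of mattering but an enormous payoff if it does.
speculative = Intervention("far-future risk reduction", p_success=1e-6, payoff=1e12, cost=1e3)
mundane = Intervention("well-evidenced project", p_success=0.97, payoff=2e3, cost=1e3)

for x in (speculative, mundane):
    print(f"{x.name:28s} evidence-rule: {acts_under_evidence_rule(x)!s:5s} "
          f"expected-utility: {acts_under_expected_utility(x)}")
```

Whether expected utility should be trusted with probabilities this tiny and payoffs this large is exactly the worry the same commenter flags a few comments below.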
I have some sympathy for your remark.
The real question is just whether SIAI has greatly overestimated at least one of the relevant probabilities. I have high confidence that the SIAI staff have greatly overestimated their ability to have a systematically positive impact on existential risk reduction.
Have you read Nick Bostrom’s paper, Astronomical Waste? You don’t have to be able to affect the probabilities by very much for existential risk to be the thing to worry about, especially if you have a decent dose of credence in utilitarianism.
Is there a decent chance, in your view, of decreasing x-risk by 10^-18 if you put all of your resources into it? That could be enough. (I agree that this kind of argument is worrisome; maybe expected utility theory or utilitarianism breaks down with these huge numbers and tiny probabilities, but it is worth thinking about.)
If you’re sold on x-risk, are there some other candidate things that might have a higher expected x-risk reduction on the margin (after due reflection)? (I’m not saying SIAI clearly wins, I just want to know what else you’re thinking about.)
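The arithmetic behind “a 10^-18 reduction could be enough” is worth writing out. A minimal sketch, with the size of the future entered as an explicit assumption (the 10^38 figure is of the order discussed in Astronomical Waste, but treat it here as a placeholder):

```python
# Expected lives saved = (potential future lives) x (reduction in extinction probability).
# Both inputs are assumptions for illustration; the argument only needs their product
# to stay enormous even when the probability shift is microscopic.
potential_future_lives = 1e38   # assumed scale of the future (placeholder value)
delta_p_extinction = 1e-18      # the tiny probability shift discussed in the thread

expected_lives_saved = potential_future_lives * delta_p_extinction
print(f"Expected future lives saved: {expected_lives_saved:.0e}")  # ~1e+20
```

On these assumptions a 10^-18 shift in extinction probability is still worth on the order of 10^20 expected lives, which is why the disagreement later in the thread is not over the multiplication but over whether a given action moves the probability at all, and in which direction.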
Have you read Nick Bostrom’s paper, Astronomical Waste? You don’t have to be able to affect the probabilities by very much for existential risk to be the thing to worry about, especially if you have a decent dose of credence in utilitarianism.
Is there a decent chance, in your view, of decreasing x-risk by 10^-18 if you put all of your resources into it? That could be enough.
I agree with you about what you say above. I personally believe that it is possible for individuals to decrease existential risk by more than 10^(-18) (though I know reasonable people who have at one time or other thought otherwise).
If you’re sold on x-risk, are there some other candidate things that might have a higher expected x-risk reduction on the margin (after due reflection)? (I’m not saying SIAI clearly wins, I just want to know what else you’re thinking about.)
Two points to make here:
(i) Though there’s huge uncertainty in judging these sorts of things and I’m by no means confident in my view on this matter, I presently believe that SIAI is increasing existential risk through unintended negative consequences. I’ve written about this in various comments, for example here, here and here.
(ii) I’ve thought a fair amount about other ways in which one might hope to reduce existential risk. I would cite the promotion and funding of an asteroid strike prevention program as a possible candidate. As I discuss here, placing money in a donor advised fund may be the best option. I wrote out much more detailed thoughts on these points which I can send you by email if you want (just PM me) but which are not yet ready for posting in public.
I agree that ‘poisoning the meme’ is a real danger, and that SIAI has historically had both positives and negatives with respect to its reputational effects. My net expectation for it at the moment is positive, but I’ll be interested to hear your analysis when it’s ready. [Edit: apparently the analysis was about asteroids, not reputation.]
Here’s the Fidelity Charitable Gift Fund for Americans. I’m skeptical about the asteroid option in light of recent investments in that area and the technology curve, although there is potential for demonstration effects (good and bad) with respect to more likely risks.
read Moral Machines for current state of the art thinking on how to build a moral machine mind.
It’s hardly that. Moral Machines is basically a survey; it doesn’t go in-depth into anything, but it can point you in the direction of the various attempts to implement robot / AI morality.
And Eliezer is one of the people it mentions, so I’m not sure how that recommendation was supposed to advise against taking him seriously. (Moral Machines, page 192)
To follow up on this, Wendell specifically mentions EY’s “friendly AI” in the intro to his new article in the Ethics and Information Technology special issue on “Robot ethics and human ethics”.
[...] many reasons to doubt [...] belief system of a cult [...] haphazard musings of a high school dropout [...] never written a single computer program [...] professes to be an expert [...] crying chicken little [...] only a handful take the FAI idea seriously.
[...] dogma [...] ignore the uncertainties at every step [...] starting a church [...] religious thinking wrapped up to look like rationality.
I am unable to take this criticism seriously. It’s just a bunch of ad hominem and hand-waving. What are the reasons to doubt? How are they ignoring the uncertainties when they list them on their webpage and bring them up in every interview? How is a fiercely atheist group religious at all? How is it a cult (there are lots of posts about this in the LessWrong archive)? How is it irrational?
Edit: And I’m downvoted. You actually think a reply that’s 50% insult and emotionally loaded language has substance that I should be engaging with? I thought it was a highly irrational response on par with anti-cryonics writing of the worst order. Maybe you should point out the constructive portion.
The response by this individual seems like a summary, rather than an argument. The fact that someone writes a polemical summary of their views on a subject doesn’t tell us much about whether their views are well-reasoned or not. A polemical summary is consistent with being full of hot air, but it’s also consistent with having some damning arguments.
Of course, to know either way, we would have to hear this person’s actual arguments, which we haven’t, in this case.
How are they ignoring the uncertainties when they list them on their webpage and bring them up in every interview?
Just because a certain topic is raised doesn’t mean that it is discussed correctly.
How is a fiercely atheist group religious at all?
The argument is that their thinking has some similarities to religion. It’s a common rhetorical move to compare any alleged ideology to religion, even if that ideology is secular.
How is it a cult (there are lots of posts about this in the LessWrong archive)?
The fact that EY displays an awareness of cultish dynamics doesn’t necessarily mean that SIAI avoids them. Personally, I buy most of Eliezer’s discussion that “every cause wants to become a cult,” and I don’t like the common practice of labeling movements as “cults.” The net for “cult” is being drawn far too widely.
Yet I wouldn’t say that the use of the word “cult” means that the individual is engaging in bad reasoning. While I think “cult” is generally a misnomer, it’s generally used as short-hand for a group having certain problematic social-psychological qualities (e.g. conformity, obedience to authority). The individual could well be able to back those criticisms up. Who knows.
We would need to hear this individual’s actual arguments to be able to evaluate whether the polemical summary is well-founded.
P.S. I wasn’t the one who downvoted you.
Edit:
high school dropout, who has never written a single computer program
I don’t know the truth of these statements. The second one seems dubious, but it might not be meant to be taken literally (“Hello World” is a program). If Eliezer isn’t a high school dropout, and has written major applications, then the credibility of this writer is lowered.
I believe you weren’t supposed to engage that reply, which is a dismissal more than criticism. I believe you were supposed to take a step back and use it as a hint as to why the SIAI’s yearly budget is 5 x 10^5 rather than 5 x 10^9 USD.
Re: “How is it a cult?”

It looks a lot like an END OF THE WORLD cult. That is a well-known subspecies of cult—e.g. see:

http://en.wikipedia.org/wiki/Doomsday_cult

“The End of the World Cult”

http://www.youtube.com/watch?v=-3uDmyGq8Ok

The END OF THE WORLD acts as a superstimulus to human fear mechanisms—and causes caring people to rush to warn their friends of the impending DOOM—spreading the panic virally. END OF THE WORLD cults typically act by stimulating this energy—and then feeding from it. The actual value of p(DOOM) is not particularly critical for all this.

The net effect on society of the FEARMONGERING that usually results from such organisations seems pretty questionable. Some of those who become convinced that THE END IS NIGH may try to prevent it - but others will neglect their future plans, and are more likely to rape and pillage.

My “DOOM” video has more—http://www.youtube.com/watch?v=kH31AcOmSjs
Slight sidetrack:

There is, of course, one DOOM scenario (ok, one other DOOM scenario) which is entirely respectable here—that the earth will be engulfed when the sun becomes a red giant.
That fate for the planet haunted me when I was a kid. People would say “But that’s billions of years in the future” and I’d feel as though they were missing the point. It’s possible that a more detailed discussion would have helped....
Recently, I’ve read that school teachers have a standard answer for kids who are troubled by the red giant scenario [1]-- that people will have found a solution by then.
This seems less intellectually honest than “The human race will be long gone anyway”, but not awful. I think the most meticulous answer (aside from “that’s the far future and there’s nothing to be done about it now”) is “that’s so far in the future that we don’t know whether people will be around, but if they are, they may well find a solution.”
[1] I count this as evidence for the Flynn Effect.

Downvoted for this.
Re: “haphazard musings of a high school dropout, who has never written a single computer program but professes to be an expert on AI.”
This opinion sounds poorly researched—e.g.: “This document was created by html2html, a Python script written by Eliezer S. Yudkowsky.”—http://yudkowsky.net/obsolete/plan.html
I posted that quote to put into perspective what others think of EY and his movement compared to what he thinks about them. Given that he thinks the same about those people, i.e. that their opinion isn’t worth much and that the LW crowd is much smarter anyway, it highlights an important aspect of the almost nonexistent cooperation between him and academia.
I don’t think one possibly-trivial Python script (to which I am unable to find source code) counts as much evidence. It sets a lower bound, but a very loose one. I have no idea whether Eliezer can program, and my prior says that any given person is extremely unlikely to have real programming ability unless proven otherwise. So I assume he can’t.
He could change my mind by either publishing a large software project, or taking a standardized programming test such as a TopCoder SRM and publishing his score.
EDIT: This is not meant to be a defense of obviously wrong hyperbole like “has never written a single computer program”.
Eliezer has faced this criticism before and responded (somewhere!). I expect he will figure out coding. I got better at programming over the first 15 years I was doing it. So: he may also take a while to get up to speed. He was involved in this:

http://flarelang.sourceforge.net/
This isn’t contrary to Robin’s post (except what you say about cryonics). Robin was saying that there is a reluctance to criticize those things in part because the experts think they are not worth bothering with.
Absence of evidence is not evidence of absence?