All the full-time employees and volunteers of SIAI that I know of assign much more probability to hard take-off (given AGI) than Robin does.
I’m not convinced that “full-time employees and volunteers of SIAI” are representative of “writers actually skilled at general rationality who do not have a very large personal vested interest in one particular answer”, even when weighted by level of rationality.
I’m under the vague impression that Daniel Dennett and Douglas Hofstadter are skeptical about hard take-off. Do you know whether that impression is correct?
ETA: …or is there a reason to exclude them from the relevant class of writers?
No, I know of no reason to exclude Douglas Hofstadter from the relevant class of writers, though his writings on the topic that I have seen are IMO not very good. Dennett has shown abundant signs of high skill at general rationality, but I do not know whether he has done the necessary reading to have an informed probability of hard take-off. But to answer your question: I do not know anything about Dennett’s opinions on hard take-off. (I would rather talk about the magnitude of the negative expected utility of the bad effects of AGI research than about “hard take-off” specifically.)
Add Bill Joy to the list of people very worried about the possibility that AI research will destroy civilization. He wrote of it in an influential piece in Wired in 2000. (And Peter Thiel, if his donations to SIAI mean what I think they mean.)
Note that unlike those who have invested a lot of labor in SIAI, and who consequently stand to gain in prestige if SIAI or its area of interest grows in prestige or importance, Bill Joy has nothing personal to gain from holding the opinion he holds. Neither do I, BTW: I applied to become a visiting fellow at SIAI last year and was turned down in a way that made it plain the decision was probably permanent and probably would not be revisited next year. Then I volunteered to work at SIAI at no cost to SIAI and was again turned down. (Added: I should rephrase that. Although SIAI is friendly and open and has loose affiliations with very many people, including myself, my discussions with SIAI have left me with the impression that I will probably never work closely enough with SIAI for an increase in its prestige, or income for that matter, to rub off on me.) I would rather not have disclosed that in public, but I think it is important to give another example of a person with no short-term personal stake in the matter who thinks that AGI research is really dangerous. It also makes people more likely to take seriously my opinion that AGI researchers should join a group like SIAI instead of publishing their results for all the world to see. (I am not an AGI researcher and am too old, at 49, to become one. Like math, it really is a young person’s game.)
Let me be more specific about how dangerous I think AGI research is: I think a healthy person of, say, 18 years of age is more likely to be killed by AGI gone bad than by cancer or by war (not counting deaths caused by military research into AGI). (I owe this way of framing the issue to Eliezer, who expressed an even higher probability to me two years ago.)
Any other questions for me?
Please expand on your reasons for thinking AGI is a serious risk within the next 60 years or so.
Hmmm… I have absolutely no knowledge of the politics involved in this, but it sounds intriguing. Could you elaborate on this a bit more?
BTW I have added a sentence of clarification to my comment.
All I am going to say in reply to your question is this: the policy that seems to work best in the part of the world where I live (California) is to apply to any educational program one would like to participate in and to every outfit one would like to join, and to interpret the rejection of such an application neither as a reflection on one’s value as a person nor as the result of the operation of “politics”.
Nope, that’s all from me. Thanks for your thorough reply :). (My question was just about the meta-level claim about expert consensus, not the object level claim that there will be a hard take-off.)