And among writers actually skilled at general rationality who do not have a very large personal vested interest in one particular answer (i.e., who have not already invested years of their lives in becoming AGI researchers), Robin Hanson is on one extreme end of the continuum of opinion on the subject.
I didn’t realize that. Have there been surveys to establish that Robin’s view is extreme?
In discussions on Overcoming Bias during the last 3 years, before and after LW spun off of Overcoming Bias, most people who voiced opinions backed by actual reasoning assigned a higher probability than Robin does to a hard take-off, given that a self-improving AGI is created.
In the spirit of impartial search for the truth, I will note that rwallace on LW advocates not worrying about unFriendly AI, but I think he has invested years in becoming an AGI researcher. Katja Grace is another who thinks a hard take-off very unlikely, and she has actual reasoning on her blog to that effect. She has not invested any time in becoming an AGI researcher, and she has lived for a time at Benton Street as a Visiting Fellow and in the Washington, D.C., area, where she traveled with the express purpose of learning from Robin Hanson.
All the full-time employees and volunteers of SIAI that I know of assign much more probability to a hard take-off (given AGI) than Robin does. At a workshop following last year’s Singularity Summit, every attendee expressed the wish that brain emulation would arrive before AGI. I get the definite impression that those wishes stem mainly from fears of a hard take-off, not from optimism about brain emulation per se. In the spirit of impartial search for truth, I note that SIAI employees and volunteers probably chose the attendee list of this workshop.
I’m not convinced that “full-time employees and volunteers of SIAI” are representative of “writers actually skilled at general rationality who do not have a very large personal vested interest in one particular answer”, even when weighted by level of rationality.
I’m under the vague impression that Daniel Dennett and Douglas Hofstadter are skeptical about hard take-off. Do you know whether that impression is correct?
ETA: . . . or is there a reason to exclude them from the relevant class of writers?
No, I know of no reason to exclude Douglas Hofstadter from the relevant class of writers, though his writings on the topic that I have seen are IMO not very good. Dennett has shown abundant signs of high skill at general rationality, but I do not know whether he has done the necessary reading to have an informed probability of hard take-off. But to get to your question: I do not know anything about Dennett’s opinions about hard take-off. (But I’d rather talk of the magnitude of the (negative) expected utility of the bad effects of AGI research than about “hard take-off” specifically.)
Add Bill Joy to the list of people very worried about the possibility that AI research will destroy civilization. He wrote of it in an influential piece in Wired in 2000. (And Peter Thiel, if his donations to SIAI mean what I think they mean.)
Note that unlike those who have invested a lot of labor in SIAI, and who consequently stand to gain in prestige if SIAI or SIAI’s area of interest gains in prestige or importance, Bill Joy has nothing personal to gain from holding the opinion he holds. Neither do I, BTW: I applied to become a Visiting Fellow at SIAI last year and was turned down in a way that made it plain that the decision was probably permanent and probably would not be revisited next year. Then I volunteered to work at SIAI at no cost to SIAI and was again turned down. (ADDED: I should rephrase that: although SIAI is friendly and open and has loose affiliations with very many people (including myself), my discussions with SIAI have left me with the impression that I will probably not be working closely enough with SIAI at any point in the future for an increase in SIAI’s prestige (or income, for that matter) to rub off on me.) I would rather not have disclosed that in public, but I think it is important to give another example of a person who has no short-term personal stake in the matter and who thinks that AGI research is really dangerous. It also makes people more likely to take seriously my opinion that AGI researchers should join a group like SIAI instead of publishing their results for all the world to see. (I am not an AGI researcher and am too old (49) to become one. Like math, it really is a young person’s game.)
Let me get more specific on how dangerous I think AGI research is: I think a healthy person of, say, 18 years of age is more likely to be killed by AGI gone bad than by cancer or by war (not counting deaths caused by military research into AGI). (I owe this way of framing the issue to Eliezer, who expressed an even higher probability to me 2 years ago.)
Any other questions for me?
Please expand on your reasons for thinking AGI is a serious risk within the next 60 years or so.
Hmmm… I have absolutely no knowledge of the politics involved in this, but it sounds intriguing. Could you elaborate on this a bit more?
BTW I have added a sentence of clarification to my comment.
All I am going to say in reply to your question is that the policy that seems to work best in the part of the world where I live (California) is to apply to any educational program one would like to participate in, to join every outfit one would like to join, and to interpret the rejection of such an application as neither a reflection on one’s value as a person nor the result of the operation of “politics”.
Nope, that’s all from me. Thanks for your thorough reply :). (My question was just about the meta-level claim about expert consensus, not the object level claim that there will be a hard take-off.)
Also, people who believe hard takeoff is plausible are more likely to want to work with SIAI, and people at SIAI will probably have heard the pro-hard-takeoff arguments more than the anti-hard-takeoff arguments. That said, <1% is as far as I can tell a clear outlier among those who have thought seriously about the issue.
When Robin visited Benton house and the 1% figure was brought up, he was skeptical that he had ever made such a claim. Do you know where that estimate came up (on OB or elsewhere)? I’m worried about ascribing incorrect probability estimates to people who are fully able to give new ones if we ask.
Off-topic question: Is Benton house the same as the SIAI house? (I see that it is in the Bay Area.) Edit: Thanks Nick and Kevin!
The people living there seem to call it Benton house or Benton but I try to avoid calling it that to most people because it is clearly confusing. It’ll be even more confusing if the SIAI house moves from Benton Street...
Yes.
Are you sure this wasn’t a worry at all due to the fact that even without hard take-off moderately smart unFriendly AI can do a lot of damage?
Well, the question prompting the discussion was whether a responsible AGI researcher should just publish his or her results (and let us, for the sake of this dialog, count as a “result” any idea that took a long time to identify, even if it might not pan out) for any old AGI researcher to see, or whether he or she should take care to control, as best he or she can, the dissemination of the results, so that the rate of dissemination to responsible researchers is optimized relative to the rate of dissemination to irresponsible ones. If an unFriendly AI can do a lot of damage without a hard take-off, well, I humbly suggest he or she should take pains to control dissemination.
But to answer your question in case you are asking out of curiosity rather than to forward the discussion on “controlled dissemination”: well, Eliezer certainly thinks hard take-off represents the majority of the negative expected utility, and if the other two attendees of the workshop that I have had long conversations with felt differently, I would more likely than not have learned of that by now. (I, too, believe that hard take-off represents the majority of the negative expected utility, even when utility is defined the “popular” way rather than the rather outré way I define it.)
Yes, I was asking out of curiosity about the responses, not specifically in regard to the issue of controlled dissemination.