But that is an absurd task, because if you don’t understand algebra, you certainly won’t be discovering differentiation. Attempting to “discover differential equations before anyone else has discovered algebra” doesn’t mean you can skip over discovering algebra; it just means you also have to discover it in addition to discovering DEs.
It seems that a more reasonable approach would be a) working towards algebra while simultaneously b) researching and publicizing the potential dangers of unrestrained algebra use. (Oops, the metaphor broke.)
To clarify: ‘Anna Salamon once described the Singularity Institute’s task as to “discover differential equations before anyone who isn’t concerned with friendliness has discovered algebra”.’
Okay, but what exactly is the suggestion here? That the OP should not publicize his work on AI? That the OP shouldn’t even work on AI at all, and should dedicate his efforts to advocating friendly AI discussion and research instead? If a major current barrier to FAI is understanding how intelligence even works to begin with, then this preliminary work (if it is useful) is going to be a necessary component of both regular AGI and FAI. Is the only problem you see, then, that it’s going to be made publicly available? Perhaps we should establish a private section of LW for Top Secret AI discussion?
I apologize for being snarky, but I can’t help but find it absurd that we should be worrying about the effects of LW articles on an unfriendly singularity, especially given that the hard takeoff model, to my knowledge, is still rather fuzzy. (Last I checked, Robin Hanson put the probability of hard takeoff at less than 1%. An unfriendly singularity is so bad an outcome that research and discussion about hard takeoff is warranted, of course, but is it not a bit of an overreaction to suggest that this series of articles might be too dangerous to be made available to the public?)
And among writers actually skilled at general rationality who do not have a very large personal vested interest in one particular answer (e.g., who have not already invested years of their lives in becoming AGI researchers), Robin Hanson is on one extreme end of the continuum of opinion on the subject.
Seems like the sensible course of action to me! Do you really think Eliezer and other responsible AGI researchers have published all of their insights into AGI?
If the OP wishes to make a career in AGI research, he can do so responsibly by affiliating himself with SIAI, the Future of Humanity Institute or some other group with a responsible approach to AGI. They will probably share their insights with him only after a lengthy probationary period during which they vigorously check him for signs that he might do something irresponsible once they have taken him into their confidence. (ADDED. If it were me, I would look mainly for signs that the candidate might make a choice which tends to have a bad effect on the global situation, but a positive effect on his or her scientific reputation or on some other personal agenda that humans typically care about.) And they will probably share their insights with him only after he has made a commitment to stay with the group for life.
I don’t buy that that’s a good approach, though. This seems more like security through obscurity to me: keep all the work hidden, and hope both a) that it’s on the right track and b) that no one else stumbles upon it. If, on the other hand, AI discussion did take place on LW, then that gives us a chance to frame the discussion and ensure that FAI is always a central concern.
People here are fond of saying “people are crazy, the world is mad,” which is sadly true. But friendliness is too important an issue for SIAI and the community surrounding it to set itself up as stewards of humanity; every effort needs to be made to bring this issue to the forefront of mainstream AI research.
I agree, which is why I wrote, “SIAI, the Future of Humanity Institute or some other group with a responsible approach to AGI”. If for some reason, the OP does not wish to or is not able to join one of the existing responsible groups, he can start his own.
In security through obscurity, a group relies on a practice they have invented and kept secret when they could have chosen instead to adopt a practice that has the benefit of peer review and more testing against reality. Well, yeah, if there exists a practice that has already been tested extensively against reality and undergone extensive peer review, then the responsible AGI groups should adopt it—but there is no practice like that for solving this particular problem. There are no good historical examples of the current situation with AGI, but the body of practice with the most direct applicability that I can think of right now is the situation during and after WW II, in which the big military powers mounted vigorous, systematic campaigns that lasted for decades to restrict the dissemination of certain kinds of scientific and technical knowledge. Let me remind you that in the U.S. this campaign included, for decades, the requirement that vendors of high-end computer hardware and machine tools obtain permission from the Commerce Department before exporting any products to the Soviets and their allies. Before WW II, other factors (like wealth and the will to continue to fight) besides scientific and technical knowledge dominated the list of factors that decided military outcomes.
Note that the current plan of SIAI for what the AGI should do after it is created is to be guided by an “extrapolation” that gives equal weight to the wishes or “volition” of every single human living at the time of the AGI’s creation, which IMHO goes a very long way toward alleviating any legitimate concerns of people who cannot join one of the responsible AGI groups.
I didn’t realize that. Have there been surveys to establish that Robin’s view is extreme?
In discussions on Overcoming Bias over the last 3 years, both before and after LW spun off of Overcoming Bias, most people who voiced opinions backed by actual reasoning assigned a higher probability than Robin does to a hard take-off, given that a self-improving AGI is created.
In the spirit of impartial search for the truth, I will note that rwallace on LW advocates not worrying about unFriendly AI, but I think he has invested years in becoming an AGI researcher. Katja Grace is another who thinks a hard take-off very unlikely, and she has actual reasoning on her blog to that effect. She has not invested any time in becoming an AGI researcher, has lived for a time at Benton Street as a Visiting Fellow, and has lived in the Washington, D.C., area, where she traveled with the express purpose of learning from Robin Hanson.
All the full-time employees and volunteers of SIAI that I know of assign much more probability to hard take-off (given AGI) than Robin does. At a workshop following last year’s Singularity Summit, every attendee expressed the wish that brain emulation would arrive before AGI. I get the definite impression that those wishes stem mainly from fears of hard takeoff, and not from optimism about brain emulation per se. In the spirit of impartial search for truth, I note that SIAI employees and volunteers probably chose the attendee list of this workshop.
I’m not convinced that “full-time employees and volunteers of SIAI” are representative of “writers actually skilled at general rationality who do not have a very large personal vested interest in one particular answer”, even when weighted by level of rationality.
I’m under the vague impression that Daniel Dennett and Douglas Hofstadter are skeptical about hard take-off. Do you know whether that impression is correct?
ETA: . . . or is there a reason to exclude them from the relevant class of writers?
No, I know of no reason to exclude Douglas Hofstadter from the relevant class of writers though his writings on the topic that I have seen are IMO not very good. Dennett has shown abundant signs of high skill at general rationality, but I do not know if he has done the necessary reading to have an informed probability of hard take-off. But to get to your question, I do not know anything about Dennett’s opinions about hard take-off. (But I’d rather talk of the magnitude of the (negative) expected utility of the bad effects of AGI research than about “hard take-off” specifically.)
Add Bill Joy to the list of people very worried about the possibility that AI research will destroy civilization. He wrote of it in an influential piece in Wired in 2000. (And Peter Thiel, if his donations to SIAI mean what I think they mean.)
Note that unlike those who have invested a lot of labor in SIAI, and who consequently stand to gain in prestige if SIAI or SIAI’s area of interest gains in prestige or importance, Bill Joy has nothing personal to gain from holding the opinion he holds. Neither do I, BTW: I applied to become a visiting fellow at SIAI last year and was turned down in such a way that made it plain that the decision was probably permanent and probably would not be revisited next year. Then I volunteered to work at SIAI at no cost to SIAI and was again turned down. (ADDED: I should rephrase that: although SIAI is friendly and open and has loose affiliations with very many people (including myself), my discussions with SIAI have left me with the impression that I will probably not be working closely enough with SIAI at any point in the future for an increase in SIAI’s prestige (or income, for that matter) to rub off on me.) I would rather not have disclosed that in public, but I think it is important to give another example of a person who has no short-term personal stake in the matter and who thinks that AGI research is really dangerous. Also, it makes people more likely to take seriously my opinion that AGI researchers should join a group like SIAI instead of publishing their results for all the world to see. (I am not an AGI researcher and am too old (49) to become one. Like math, it really is a young person’s game.)
Let me get more specific on how dangerous I think AGI research is: I think a healthy person of, say, 18 years of age is more likely to be killed by AGI gone bad than by cancer or by war (not counting deaths caused by military research into AGI). (I owe this way of framing the issue to Eliezer, who expressed an even higher probability to me 2 years ago.)
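To make the scale of that claim concrete, here is a rough back-of-envelope sketch in Python; the mortality figures are my own illustrative assumptions, not numbers from the thread:

```python
# Rough scale check, not a rigorous estimate. If the lifetime probability
# that a healthy 18-year-old eventually dies of cancer is somewhere around
# 0.15-0.20 (an assumed, order-of-magnitude figure), and the probability of
# dying in a war is assumed to be much smaller, then the claim "more likely
# to be killed by AGI gone bad than by cancer or by war" implies a
# probability of death-by-AGI above the larger of the two.

assumed_p_cancer_death = 0.18   # assumption, for illustration only
assumed_p_war_death = 0.02      # assumption, for illustration only

implied_lower_bound = max(assumed_p_cancer_death, assumed_p_war_death)
print(f"The claim implies P(killed by AGI gone bad) > {implied_lower_bound:.0%}")
```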
Any other questions for me?
Please expand on your reasons for thinking AGI is a serious risk within the next 60 years or so.
Hmmm… I have absolutely no knowledge of the politics involved in this, but it sounds intriguing… Could you elaborate on this a bit more?
BTW I have added a sentence of clarification to my comment.
All I am going to say in reply to your question is that the policy that seems to work best in the part of the world in which I live (California) is to apply to participate in any educational program one would like to participate in and to join every outfit one would like to join, and to interpret the rejection of such an application as neither a reflection on one’s value as a person nor the result of the operation of “politics”.
Nope, that’s all from me. Thanks for your thorough reply :). (My question was just about the meta-level claim about expert consensus, not the object level claim that there will be a hard take-off.)
Also, people who believe hard takeoff is plausible are more likely to want to work with SIAI, and people at SIAI will probably have heard the pro-hard-takeoff arguments more than the anti-hard-takeoff arguments. That said, <1% is as far as I can tell a clear outlier among those who have thought seriously about the issue.
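The selection effect described above can be illustrated with a toy simulation; every number in it is made up purely for illustration:

```python
# Toy model of the selection effect: even if the broader population of
# writers assigns a low average probability to hard takeoff, the subset who
# choose to work with an organization focused on that risk shows a higher
# average, simply because willingness to join rises with the probability
# one assigns.
import random

random.seed(0)

# Made-up population of writers; each value is that writer's
# P(hard takeoff | self-improving AGI). The mean is about 0.1.
population = [random.betavariate(1, 9) for _ in range(10_000)]

# Assume the chance of joining is proportional to the assigned probability.
joiners = [p for p in population if random.random() < p]

print(f"population mean: {sum(population) / len(population):.2f}")
print(f"joiners mean:    {sum(joiners) / len(joiners):.2f}")
```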
When Robin visited Benton house and the 1% figure was brought up, he was skeptical that he had ever made such a claim. Do you know where that estimate came from (on OB or wherever)? I’m worried about ascribing incorrect probability estimates to people who are fully able to give new ones if we asked.
Off-topic question: Is Benton house the same as the SIAI house? (I see that it is in the Bay Area.) Edit: Thanks Nick and Kevin!
The people living there seem to call it Benton house or Benton but I try to avoid calling it that to most people because it is clearly confusing. It’ll be even more confusing if the SIAI house moves from Benton Street...
Yes.
Are you sure this wasn’t at all due to the worry that, even without a hard take-off, a moderately smart unFriendly AI can do a lot of damage?
Well, the question prompting the discussion was whether a responsible AGI researcher should just publish his or her results (and let us, for the sake of this dialog, count as a “result” an idea that took a long time to identify, even though it might not pan out) for any old AGI researcher to see, or whether he or she should take care to control as best he or she can the dissemination of the results, so that the rate of dissemination to responsible researchers is optimized relative to the rate of dissemination to irresponsible ones. If an unFriendly AI can do a lot of damage without hard take-off, well, I humbly suggest he or she should take pains to control dissemination.
But to answer your question in case you are asking out of curiosity rather than to forward the discussion on “controlled dissemination”: well, Eliezer certainly thinks hard take-off represents the majority of the negative expected utility, and if the other (2) attendees of the workshop that I have had long conversations with felt differently, I would have learned of that by now more likely than not. (I, too, believe that hard take-off represents the majority of the negative expected utility even when utility is defined the “popular” way rather than the rather outré way I define it.)
Yes, this was a question asked out of curiosity about the responses, not specifically in regard to the issue of controlled dissemination.
For rational people skeptical about hard takeoff, consider the Interim Report from the Panel Chairs, AAAI Presidential Panel on Long-Term AI Futures. Most economists I’ve talked to are also quite skeptical, much more so than I. Dismissing such folks because they haven’t read enough of your writings or attended your events seems a bit biased to me.
“The panel of experts was overall skeptical of the radical views expressed by futurists and science-fiction authors. Participants reviewed prior writings and thinking about the possibility of an “intelligence explosion” where computers one day begin designing computers that are more intelligent than themselves. They also reviewed efforts to develop principles for guiding the behavior of autonomous and semi-autonomous systems. Some of the prior and ongoing research on the latter can be viewed by people familiar with Isaac Asimov’s Robot Series as formalization and study of behavioral controls akin to Asimov’s Laws of Robotics. There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity,” and also about the large-scale loss of control of intelligent systems.”
Hi Robin!
If a professional philosopher or an economist gives his probability that AGI researchers will destroy the world, I think a curious inquirer should check for evidence that the philosopher or economist has actually learned the basics of the skills and domains of knowledge the AGI researchers are likely to use.
I am pretty sure that you have, but I do not know that, e.g., Daniel Dennett has, excellent rationalist though he is. All I was saying is that my interlocutor should check that before deciding how much weight to give Dennett’s probability.
But in the above you explicitly choose to exclude AGI researchers. Now you also want to exclude those who haven’t read a lot about AGI? Seems like you are trying to exclude as irrelevant everyone who isn’t an AGI amateur like you.
I guess it depends where exactly you set the threshold. Require too much knowledge, and the pool of opinions, and the diversity of the sources of those opinions, will be too small (i.e., just “AGI amateurs”). On the other hand, the minimum amount of research required to properly understand the AGI issue is substantial, and if someone demonstrates a serious lack of understanding, such as claiming that AI will never be able to do something that narrow AIs can do already, then I have no problem excluding their opinion.
About advanced AI being developed, extremely rapid economic growth upon development, or local gains?
Now that you mention it, I didn’t have any opinion about whether Eliezer et al. had secret ideas about AI.
My tentative assumption is that they hadn’t gotten far enough to have anything worth keeping secret, but this is completely a guess based on very little.
Lots of guesswork.
If the probability of hard takeoff were 0.1%, it would still be too high a probability for me to want there to be public discussion of how one might build an AI.
Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction in existential risk would be worth (from a utilitarian expected-utility point of view) a delay of over 10 million years. See http://www.nickbostrom.com/astronomical/waste.html
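To spell out the arithmetic behind that figure, here is a minimal sketch. It assumes, roughly in the spirit of Bostrom's paper, that the accessible future contains on the order of a billion years of value accruing more or less evenly:

```python
# Back-of-envelope version of the trade-off described above.
# Assumption for illustration: the future holds roughly 1e9 years of value
# accruing evenly, so delaying the start by D years forfeits about D / 1e9
# of the total, while reducing existential risk by one percentage point
# preserves 0.01 of the total in expectation.

FUTURE_YEARS = 1e9        # assumed span of future value (order of magnitude)
RISK_REDUCTION = 0.01     # one percentage point of existential risk

# Break-even delay: the D for which D / FUTURE_YEARS == RISK_REDUCTION.
break_even_delay_years = RISK_REDUCTION * FUTURE_YEARS
print(f"A 1-point risk reduction is worth a delay of up to "
      f"{break_even_delay_years:,.0f} years")   # 10,000,000 years
```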