But on at least some of these accounts the effect had by LW and/or MIRI and/or Yudkowsky’s specific focus on these issues may be not just suboptimal, but actually negative. To be precise: it may actually be causing more suffering than would otherwise exist.
If I understand correctly, you think that LW, MIRI and other closely related people might have a net negative impact, because they distract some people from contributing to the more productive subareas and approaches within AI research and existential risk prevention, directing them to subareas you estimate to be much less productive. For the sake of argument, let’s assume that is correct, and that if everyone who follows MIRI’s approach to AGI turned to those more productive subareas of AI, it would be a net benefit to the world. But you should consider the other side of the coin: don’t blogs like LessWrong, and books like Bostrom’s, actually attract some students to consider working on AI, including the areas you consider beneficial, who would otherwise be working in areas unrelated to AI? Wouldn’t the number of people who have even heard of the concept of existential risk be smaller without people like Yudkowsky and Bostrom? I don’t have numbers, but since you are concerned about brain drain from other subareas of AGI and existential risk research, do you really think it unlikely that the popularization work done by these people attracts enough young people to AGI and existential risk in general to compensate for the loss of a few individuals, even in the subareas of these fields that are unrelated to FAI?
We are finally coming out of a prolonged AI winter. And although funding is finally available to move the state of the art in automation forward, and to accelerate progress in life sciences and molecular manufacturing that will bring great humanitarian change, we have created a band of Luddites who fear the solution more than the problem, and who, in a strange twist of doublethink, consider themselves humanitarians for fighting progress.
But do people here actually fight progress? Has anyone actually retired from (or been dissuaded from pursuing) AI research after reading Bostrom or Yudkowsky?
If I understand you correctly, you fear that concerns about AI safety, being the kind of thing that evokes strong emotions in a listener’s mind, are sooner or later bound to be picked up by populist politicians and activists, who would sow and exploit these fears among the general population in order to win elections, popularity, or prestige among their peers, thus leading to various regulations and restrictions on funding, because that is what such activists (having become popular and influential by catering to the fears of the masses) would demand?
I’m not sure how someone standing on a soapbox and yelling “AI is going to kill us all!” (Bostrom, admittedly not a quote) can be interpreted as actually helping get more people into practical AI research and development.
You seem to be presenting a false choice: is there more awareness of AI in a world with Bostrom et al., or in the same world without them? But it doesn’t have to be that way. Ray Kurzweil has done quite a bit to keep interest in AI alive without fear-mongering. Maybe we need more Kurzweils and fewer Bostroms.
Data point: a feeling that I ought to do something about AI risk is the only reason why I submitted an FLI grant proposal that involves some practical AI work, rather than just figuring that the field isn’t for me and doing something completely different.
I’m not sure how someone standing on a soapbox and yelling “AI is going to kill us all!” (Bostrom, admittedly not a quote) can be interpreted as actually helping get more people into practical AI research and development.
I don’t know how many copies of Bostrom’s book were sold, but it was on the New York Times best-selling science books list. Some of those copies were read by high school students. Since very few people leave practical AI research for FAI research, even if only a tiny fraction of those young readers think “This AI thing is really exciting and interesting. Instead of majoring in X (which is unrelated to AI), I should major in computer science and focus on AI”, it would probably result in a net gain for practical AI research.
You seem to be presenting a false choice: is there more awareness of AI in a world with Bostrom et al., or in the same world without them? But it doesn’t have to be that way. Ray Kurzweil has done quite a bit to keep interest in AI alive without fear-mongering. Maybe we need more Kurzweils and fewer Bostroms.
I argued against this statement:
specific focus on these issues may be not just suboptimal, but actually negative. To be precise: it may actually be causing more suffering than would otherwise exist.
When people say that an action leads to a negative outcome, they usually mean that taking that action is worse than not taking it, i.e. they compare the result to zero. If you add another option to the comparison, then the word “suboptimal” should be used instead. Since I argued against “negativity”, not “suboptimality”, I don’t think that the existence of other options is relevant here.