I’m not sure how someone standing on a soapbox and yelling “AI is going to kill us all!” (Bostrom, admittedly not a quote) can be interpreted as actually helping get more people into practical AI research and development.
You seem to be presenting a false choice: is there more awareness of AI in a world with Bostrom et al., or the same world without? But it doesn’t have to be that way. Ray Kurzweil has done quite a bit to keep interest in AI alive without fear-mongering. Maybe we need more Kurzweils and fewer Bostroms.
Data point: a feeling that I ought to do something about AI risk is the only reason why I submitted an FLI grant proposal that involves some practical AI work, rather than just figuring that the field isn’t for me and doing something completely different.
> I’m not sure how someone standing on a soapbox and yelling “AI is going to kill us all!” (Bostrom, admittedly not a quote) can be interpreted as actually helping get more people into practical AI research and development.
I don’t know how many copies of Bostrom’s book were sold, but it made the New York Times best-seller list for science books. Some of those copies were read by high school students. Since very few people leave practical AI research for FAI research, even if only a tiny fraction of those young readers think “This AI thing is really exciting and interesting. Instead of majoring in X (which is unrelated to AI), I should major in computer science and focus on AI”, the book would probably result in a net gain for practical AI research.
> You seem to be presenting a false choice: is there more awareness of AI in a world with Bostrom et al., or the same world without? But it doesn’t have to be that way. Ray Kurzweil has done quite a bit to keep interest in AI alive without fear-mongering. Maybe we need more Kurzweils and fewer Bostroms.
I argued against this statement:
> specific focus on these issues may be not just suboptimal, but actually negative. To be precise: it may actually be causing more suffering than would otherwise exist.
When people say that an action leads to a negative outcome, they usually mean that taking that action is worse than not taking it, i.e. they compare the result to zero. If you bring additional options into the comparison, then the word “suboptimal” should be used instead. Since I argued against “negativity”, not “suboptimality”, I don’t think the existence of other options is relevant here.