I.J. Good, Marcus Hutter, Jürgen Schmidhuber, Kevin Warwick, Stephen Omohundro, Vernor Vinge, etc.
That really is just a handful of names, and their practically useful accomplishments are few. Most AI researchers would consider them dreamers.
Good was involved at the very early end of computing, so it is a bit hard for him to have done modern AI work. But the work he did do was quite impressive: he did cryptography work in World War II with Alan Turing, and both during and after the war worked on theoretical and practical computer systems. He also did a great deal of probability work, much of which is used today, in some form or another, in a variety of fields including AI. For example, look at the Good-Turing estimator.
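To make that example concrete: the simplest Good-Turing result estimates the total probability of seeing a species you have never observed before as N1/N, where N1 is the number of species seen exactly once and N is the total number of observations. A minimal sketch (the function name is my own; this is the basic unseen-mass estimate, not a full smoothed estimator):

```python
from collections import Counter

def good_turing_unseen_mass(samples):
    """Simple Good-Turing estimate of the total probability mass of
    unseen species: N1 / N, where N1 is the number of species observed
    exactly once and N is the total number of observations."""
    counts = Counter(samples)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(samples)

# Of 10 draws, 'c' and 'd' each appear exactly once, so the estimated
# probability that the next draw is a new, unseen species is 2/10.
print(good_turing_unseen_mass(list("aaaabbbbcd")))  # → 0.2
```

The same idea, extended with smoothed frequency-of-frequency counts, underlies the Good-Turing smoothing used in language modeling and species-estimation work today.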
Schmidhuber did some of the first work on practical genetic algorithms and made very important contributions to neural nets.
Warwick has done so much work in AI and robotics that listing it all would take a long time. One can argue that most of it hasn’t gone outside the lab, but it is clear that much of that work is practically useful even if it is not yet economically feasible to use it on a large scale (which, frankly, is the status of most AI research in general at this point).
Overall, I don’t think your characterization is accurate, although your point that AI researchers with such concerns make up a small percentage of all researchers seems valid.