Are people in academia really unable to spend their “peak years” researching stuff like probability, machine learning or decision theory? I find this hard to believe.
Of course people spend their peak years working in those fields. If Eliezer took his decision theory stuff to academia, he could pursue that in philosophy. Nick Bostrom’s anthropic reasoning work is well-accepted in philosophy. But the overlap is limited. Robin Hanson’s papers on the economics of machine intelligence are not taken seriously (as career-advancing work) by economists. Nick Bostrom’s work on superintelligence and the future of human evolution falls far short of career-optimal on a standard philosophy track.
There’s a growing (but still pretty marginal, in scale and status) “machine ethics” field, but within it analysis related to existential risk or superintelligence is much less career-optimal than work on nearer-term issues like Predator drones.
Some topics are important from an existential risk perspective and well-rewarded in academia (which tends to attract a lot of talent to them, with diminishing marginal returns). Others are important but less rewarded, and pursuing them requires slack (donation funding for the FHI with a mission encompassing the work, tenure, etc.).
There are various ways to respond to this. I see a lot of value in trying to seed certain areas: illuminating the problems in a respectable fashion so that smart academics (e.g. David Chalmers) spend some of their slack on under-addressed problems, hopefully eventually making those areas well-rewarded.
Thank you for the balanced answer.