Just as an aside, note that Nick Bostrom is in academia, at the Future of Humanity Institute at Oxford, which he personally founded (as Eliezer founded SIAI) and which has been funded mostly by donations (like SIAI), mainly those of James Martin. That funding stream allows the FHI to focus on the important topics it does focus on, rather than devoting all its energy to slanting work toward the latest grant fad. FHI’s ability to expand with new hires, and even to sustain operations, depends on private donations, although grants have also played important roles. Robin spent many years getting tenure, mostly focused on relatively standard topics.
One still needs financial resources to get things done in academia (and devoting one’s peak years to tenure-optimized research in order to exploit post-tenure freedom has a sizable implicit cost, not to mention the opportunity costs of academic teaching loads). The main advantages, which are indeed very substantial, are increased status and access to funding from grant agencies.
Are people in academia really unable to spend their “peak years” researching stuff like probability, machine learning or decision theory? I find this hard to believe.
Of course people spend their peak years working in those fields. If Eliezer took his decision theory stuff to academia he could pursue that in philosophy. Nick Bostrom’s anthropic reasoning work is well-accepted in philosophy. But the overlap is limited. Robin Hanson’s economics of machine intelligence papers are not taken seriously (as career-advancing work) by economists. Nick Bostrom’s stuff on superintelligence and the future of human evolution is not career-optimal by a large margin on a standard philosophy track.
There’s a growing (but still pretty marginal, in scale and status) “machine ethics” field, but within it analysis related to existential risk or superintelligence is much less career-optimal than issues related to Predator drones and similar near-term topics.
Some topics are important from an existential risk perspective and well-rewarded in academia (which tends to result in a lot of talent working on them, with diminishing marginal returns). Others are important but less rewarded, and there one needs slack to pursue them (e.g. donation funding for an institution like the FHI whose mission encompasses the work, or tenure).
There are various ways to respond to this. I see a lot of value in trying to seed certain areas, illuminating the problems in a respectable fashion so that smart academics (e.g. David Chalmers) use some of their slack on under-addressed problems, and hopefully eventually make those areas well-rewarded.
Thank you for the balanced answer.