Sorry, can’t parse. Are you making any substantive argument? What’s the difference between your worktime now and the free time you’d have if you worked an easy day job, or supported yourself with contract programming, or something? Is it only that there’s more of it, or is there a qualitative difference?
Time, mental energy, focus. I cannot work two jobs and do justice to either of them.
I am feeling viscerally insulted by your assertion that anything I do can be done in my spare time. Let’s try that with nuclear engineers and physicists and lawyers and electricians, shall we? Oh, I’m sorry, was that work actually important enough to deserve a real effort or something?
Sorry, I didn’t mean to insult you. Also I didn’t downvote your comment, someone else did.
What worries me is the incongruity of it all. What if Einstein, instead of working as a patent clerk and doing physics at the same time, chose to set up a Relativity Foundation to provide himself with money? What if this foundation went on for ten years without actually publishing novel rigorous results, only doing advocacy for the forthcoming theory that will revolutionize the physics world? This is just, uh...
A day job is actually the second recourse that comes to mind. The first recourse is working in academia. There are plenty of people there doing research in logic, probability, computation theory, game theory, decision theory, or any other topic you consider important. Robin Hanson is in academia. Nick Bostrom is in academia. Why build SIAI?
Just as an aside, note that Nick Bostrom is in academia at the Future of Humanity Institute at Oxford, which he personally founded (as Eliezer founded SIAI) and which has been funded mostly by donations (like SIAI), mainly those of James Martin. That funding stream lets the FHI focus on the important topics it does focus on, rather than devoting all its energy to slanting work in favor of the latest grant fad. FHI's ability to expand with new hires, and even to sustain operations, depends on private donations, although grants have also played important roles. Robin spent many years getting tenure, mostly focused on relatively standard topics.
One still needs financial resources to get things done in academia (and devoting one’s peak years to tenure-optimized research in order to exploit post-tenure freedom has a sizable implicit cost, not to mention the opportunity costs of academic teaching loads). The main advantages, which are indeed very substantial, are increased status and access to funding from grant agencies.
Are people in academia really unable to spend their “peak years” researching stuff like probability, machine learning or decision theory? I find this hard to believe.
Of course people spend their peak years working in those fields. If Eliezer took his decision theory stuff to academia he could pursue that in philosophy. Nick Bostrom’s anthropic reasoning work is well-accepted in philosophy. But the overlap is limited. Robin Hanson’s economics of machine intelligence papers are not taken seriously (as career-advancing work) by economists. Nick Bostrom’s stuff on superintelligence and the future of human evolution is not career-optimal by a large margin on a standard philosophy track.
There’s a growing (but still pretty marginal, in scale and status) “machine ethics” field, but analysis related to existential risk or superintelligence is much less career-optimal there than issues related to Predator drones and similar.
Some topics are important from an existential risk perspective and well-rewarded in academia (which tends to result in a lot of talent working on them, with diminishing marginal returns). Others are important but less rewarded, and there one needs slack to pursue them (donation funding for the FHI with a mission encompassing the work, tenure, etc.).
There are various ways to respond to this. I see a lot of value in trying to seed certain areas, illuminating the problems in a respectable fashion so that smart academics (e.g. David Chalmers) use some of their slack on under-addressed problems, and hopefully eventually make those areas well-rewarded.
For an important topic, it makes sense to have a dedicated research center. And in the end, SIAI is supposed to create a Friendly AI for real, not just to design it. As it turns out, SIAI also manages to serve many other purposes, like organizing the summits. As for FAI theory, I think it would have developed more slowly if Eliezer had apprenticed himself to a computer science department somewhere.
However, I do think we are at a point where the template of the FAI solution envisaged by SIAI could be imitated by mainstream institutions. That solution is, more or less: figure out the utility function implicit in the human decision process; figure out the utility function produced by reflectively idealizing that natural utility function; then create a self-enhancing AI with this second utility function. I think that is an approach to ethical AI which could easily become the consensus idea of what should be done.
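As a toy illustration of the first step (recovering a utility function from observed human choices), here is a minimal sketch. This is my own illustrative example, not anything SIAI has published or endorsed: it assumes an agent repeatedly picks its preferred option from a pair, and estimates a crude utility for each option as its win rate across the pairs it appeared in.

```python
# Toy sketch of inferring a utility function from observed choices.
# All names here are illustrative, not any actual FAI method.

from collections import defaultdict

def infer_utilities(observed_choices):
    """observed_choices: list of (chosen, rejected) pairs."""
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for chosen, rejected in observed_choices:
        wins[chosen] += 1
        appearances[chosen] += 1
        appearances[rejected] += 1
    # Normalized win rate serves as a crude utility estimate.
    return {opt: wins[opt] / appearances[opt] for opt in appearances}

choices = [("tea", "coffee"), ("tea", "water"),
           ("coffee", "water"), ("tea", "coffee")]
utilities = infer_utilities(choices)
# "tea" wins all 3 of its appearances, "water" wins none of its 2.
```

The hard parts of the template, of course, are not this step but the reflective idealization of the inferred function and the self-enhancement that preserves it; this only shows that the first step is the kind of problem (preference inference) that mainstream machine learning already works on.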
What if Einstein, instead of working as a patent clerk and doing physics at the same time, chose to set up a Relativity Foundation to provide himself with money?
Setting up a Relativity Foundation is a harder job than being a patent clerk.
What’s the difference between your worktime now and the free time you’d have if you worked an easy day job, or supported yourself with contract programming, or something?
The difference is the attention spent on contract programming. If this can be eliminated, it should be. And it can.
Thank you for the balanced answer.