SIAI has a big Human Resources problem. Eliezer had a really difficult time finding anyone to hire as an assistant/coworker at SIAI who didn’t immediately set out to do something really, really stupid. So he’s blogging and writing a book on rationality in the hope of finding someone worthwhile to work with.
Michael Vassar is much, much better at the H.R. thing. We still have H.R. problems but could now actually expand at a decent clip given more funding.
Unless you’re talking about directly working on the core FAI problem, in which case, yes, we have a huge H.R. problem. The phrasing above might sound somewhat misleading; it’s not that I hired people for A.I. research and they failed at once, or that I couldn’t find anyone above the level of the basic stupid failures. Rather, it takes a lot more than “beyond the basic stupid failures” to avoid clever failures and actually get stuff done, and the basic stupid failures give you some idea of the baseline level of competence beyond which we need some number of standard deviations.
Yeah, sorry for phrasing it wrong. I guess I should have said:
“Eliezer had a really difficult time finding anyone to hire as an assistant/coworker at SIAI who didn’t immediately suggest something really, really stupid when told about what they were working on.”
And yes, I did mean that you had trouble finding people to work directly on the core FAI problem.
Now I’m really curious: what were the “really, really stupid” things that were attempted?
http://lesswrong.com/lw/tf/dreams_of_ai_design/
http://lesswrong.com/lw/lq/fake_utility_functions/
and many, many other archived posts cover this.