Thanks for writing this up!
As a participant, I think the claim that MSFP was a resounding success is a little strong. It’s not at all clear to me that anyone gained new skills by attending (at least, I don’t feel like I did), as distinct from learning about new ideas, using their existing skills, becoming convinced of various positions, and making social connections (which are more than enough to explain the new hires). To me it was an interesting experiment whose results I find hard to evaluate.
I don’t claim that it developed skill and talent in all participants, nor even in the median participant. I do stand by my claim, though, that it appears to have had drastically good effects on a few people, and that it led directly to MIRI hires, at least one of which would not have happened otherwise :-)
And yet you called it “a resounding success”. Does that mean that you’re focusing on the crème de la crème, the top tier of the participants, while being less concerned with what’s happening in lower quantiles?
Yes, precisely. (Transparency illusion strikes again! I had considered it obvious that the default outcome was “a few people are nudged slightly more towards becoming AI alignment researchers someday”, and that the outcome of “actually cause at least one very talented person to become an AI alignment researcher who otherwise would not have, over the course of three weeks” was clearly in “resounding success” territory, whereas “turn half the attendees into AI alignment researchers” is in I’ll-eat-my-hat territory.)
For this unusual, MIRI-commissioned workshop, yes.
Is CFAR going to market themselves like this?
[at the workshop]:
“Look to the left of you, now to the right of you, now in 12 other directions. Only one of you will have a strong positive effect from this workshop.”
I would expect not for a paid workshop! Unlike CFAR’s core workshops, which are highly polished and get median 9⁄10 and 10⁄10 “are you glad you came” ratings, MSFP:
- was free and experimental,
- produced two new top-notch AI x-risk researchers for MIRI (in my personal judgement as a mathematician, and excluding myself), and
- produced several others who were willing hires by the end of the program, and whom I would totally vote to hire if more resources (both funding and personnel) were available to do so.
I am not saying it wasn’t a worthwhile effort (and I agreed to help look into this data, right?). I am just saying that if your definition of “resounding success” is one that cannot be used to market this workshop in the future, that definition is a little peculiar...
In general, it’s hard to find effects of anything in the data.
The value of running a workshop and the things you can use to market a workshop are distinct, and that distinction seems to explain the disagreement here.
The fact that a workshop is held at a lovely venue is good for marketing, but irrelevant to the value of running it. That is not confusing.
Sure, but, for example, the things used to market a charity and the effectiveness of that charity are also distinct.
People worry about “effectiveness.” Is that going out the window in this case?
See Nate’s comment above:
http://lesswrong.com/lw/n39/why_cfar_the_view_from_2015/cz99
And, FWIW, I would also consider anything that causes a small number of top-caliber researchers to become full-time AI safety researchers while spending less than $100k to be extremely “effective”.
[This is in fact a surprisingly difficult problem to solve. Aside from personal experience seeing the difficulty of causing people to become safety researchers, I have also been told by some rich, successful AI companies earnestly trying to set up safety research divisions (yay!) that they are unable to hire appropriately skilled people to work full-time on safety.]
The search for Kwisatz Haderach is serious business.
That seems a little surprising to me. Even if CFAR weren’t involved at all, I’d naively have expected that, e.g., having people practice formal logic problems from a textbook would cause skill gains in formal logic. Could you talk a bit about what kinds of skills you think MSFP was attempting to teach?