I don’t claim that it developed skill and talent in all participants, nor even in the median participant.
And yet you called it “a resounding success”. Does that mean that you’re focusing on the crème de la crème, the top tier of the participants, while being less concerned with what’s happening in lower quantiles?
Yes, precisely. (Transparency illusion strikes again! I had considered it obvious that the default outcome was “a few people are nudged slightly more towards becoming AI alignment researchers someday”, and that the outcome of “actually cause at least one very talented person to become an AI alignment researcher who otherwise would not have, over the course of three weeks” was clearly in “resounding success” territory, whereas “turn half the attendees into AI alignment researchers” is in I’ll-eat-my-hat territory.)
For this unusual, MIRI-commissioned workshop, yes.