In this specific case, I bet workshop participants would actually find it fun and worthwhile to take before/after Big Five and Raven's surveys, so it could be a value-add in addition to a benchmarking metric.
Note that workshop participants already answer a fair number of questions beforehand (and a year later) to give a sense of how they progress, which I think ties in more closely with what the program is supposed to teach.
(My recollection is that the survey roughly maxed out the amount of time/attention I was willing to spend on surveys, although I'm not sure.)
Huh. I’m surprised that after finding significant changes on well-validated psychological instruments in the 2015 study, CFAR didn’t incorporate these instruments into their pre-/post-workshop assessments.
Also surprised that they dropped them from the 2017 impact analysis.
The 2017 impact analysis seems to be EA safety focused. When their theory of impact is about EA safety, it’s plausible to me that this made analysis by standard metrics less important to them.
Do you mean “AI safety focused”?