I hadn’t thought about the specific use-case of scholar support allowing people to get help with weaknesses without having to trust that evaluators would consider those weaknesses fairly. I found that an interesting new gear.
(I think I had some version of the "air-gapping evaluation from information gathering" concept, but I hadn’t read your previous post on it, nor thought about applying it in this particular context.)
I think an ideal world somehow makes it true, and credibly communicates that it’s true, that evaluators can be trusted with this sort of information. But that’s a hard world to build, and I’m not even sure how I’d go about building it.
In the meantime, I think a useful thing to have is somewhat anonymized info about what sorts of problems scholars face, and with what frequency. E.g., if it turns out that N% of scholars are depressed, it’s probably useful for mentors to have that in their model.
Also:
> We asked scholars who reported attending at least one Scholar Support session, “Assuming you got a grant instead of receiving 1-1 scholar support meetings during MATS, how much would you have needed to receive in order to be indifferent between the grant and coaching?”[4] The average and median responses were $3705 and $750, respectively. The scholar who responded $40,000 confirmed this figure was not a typo, citing support with “prioritization, project selection, navigation of alignment and general AI landscape, career advice, support during difficulties, individually tailored advice for specific meetings and projects.”
I liked this operationalization.