So, what’s really needed is a meta-evaluation organization that can help donors choose how much effort to direct toward direct results, how much toward evaluation, and how much toward outreach (and that can evaluate the evaluators). And then a meta-meta-evaluation organization to figure out how to rate and value the evaluator-evaluators. And so on.
My guess is that each level should get handwave-exponentially-fewer resources, and that this converges quickly: zero people working seriously on meta-meta-meta-evaluation, and only fractional people, working in ad-hoc ways, even on meta-meta-evaluation. But the overall topic might be big enough now to support a university group studying the relative effectiveness of EA aggregators, compared to each other and to direct action groups.
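As a back-of-the-envelope sketch (the fraction here is my own assumption, not anything measured): if direct work gets a budget B and each meta-level gets some fixed fraction r of the level below it, the meta-layers together cost B*(r + r^2 + r^3 + ...) = B*r/(1-r). With r = 0.1, that is about 11% of the direct budget in total; the second meta-level is already down to 1% of it, and the third to 0.1%, which in practice is nobody.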
I think one level of meta-evaluation should be sufficient :-) Namely, one organization that would help donors decide how much effort to put into nonprofits dedicated to promoting EA, nonprofits dedicated to evaluating charities, and direct action nonprofits.