FWIW, I find it hard to make judgements on these kinds of aggregate statistics, and would be kind of surprised if other people know how to make judgements on these either. “Having worked at a scaling lab” or “being involved with an AI safety grantmaking organization” or “being interested in AI control” just aren’t very informative pieces of information, especially if I don’t even have individual profiles.
My sense is that if you want people to actually come to trust your criteria, you will either have to be more specific, or just list people’s names (the latter of which would be ideal, and would also create concrete points of accountability).
When onboarding advisors, we made it clear that we would not reveal their identities without their consent. I certainly don’t want to require that our advisors make their identities public, as I believe this might compromise the intent of anonymous peer review: to obtain genuine assessment, without fear of bias or reprisals. As with most academic journals, the integrity of the process is dependent on the editors; in this case, the MATS team and our primary funders.
It’s possible that a mere list of advisor names (without associated ratings) would be sufficient to ensure public trust in our process without compromising the peer review process. We plan to explore this option with our advisors in the future.
Yeah, it’s definitely a kind of messy tradeoff. My sense is just that the aggregate statistics you provided didn’t have that many bits of evidence that would allow me to independently audit a trust chain.
A thing that I do think might be more feasible is to make it opt-in for advisors to be public. E.g. SFF only had a minority of recommenders be public about their identity, but I do still think it helps a good amount to have some names.
(Also, just for historical context: most peer review in the history of science was not anonymous. Anonymous peer review is a quite recent invention, and IMO not one with a great track record. Editorial peer review with non-anonymous referees was more common throughout the history of the sciences. Emulating anonymous peer review without comparing it to the other options that IMO have a better track record seems a bit cargo-culty to me.)
In this comment we list the names of some of our advisors.