On the point of peer review, many AI safety researchers already get a form of peer review by circulating their drafts among other researchers.
I expect this to become less feasible as the field grows, especially as new researchers enter who do not yet have strong connections. For example, in my own work it has been useful to share drafts with colleagues for early feedback, but there was also value in the comments I received through formal peer review, which pointed out things like related work none of us was aware of (this is based on my experience publishing results in mathematics some years ago).
Yeah, I think I agree with this. If you work at an organization, you can still get informal peer review from the people you work with, but more varied feedback is preferable, and some people may not work at an organization at all.