I don’t see a good reason to exclude agenda-style posts, but I do think it’d be important to treat them differently from more here-is-a-specific-technical-result posts.
Broadly, we’d want to be improving the top-level collective AI alignment research ‘algorithm’. With that in mind, I don’t see an area where more feedback/clarification/critique of some kind wouldn’t be helpful.
The questions seem to be:
What form should feedback/review… take in a given context?
Where is it most efficient to focus our efforts?
Productive feedback/clarification on high-level agendas seems potentially quite efficient. My worry would be excessive selection pressure towards paths that are clear and simply justified. However, where an agenda does use specific assumptions and arguments to motivate its direction, early ‘review’ seems useful.