We want to go through the different research agendas (and I already knew about yours), as they give different views/paradigms on AI Alignment. Yet I’m not sure how relevant a review of such posts is. In a sense, the “reviewable” part is the actual research that underlies the agenda, right?
I don’t see a good reason to exclude agenda-style posts, but I do think it’d be important to treat them differently from more here-is-a-specific-technical-result posts.
Broadly, we’d want to be improving the top-level collective AI alignment research ‘algorithm’. With that in mind, I don’t see an area where more feedback/clarification/critique of some kind wouldn’t be helpful.
The questions seem to be:
What form should feedback/review… take in a given context?
Where is it most efficient to focus our efforts?
Productive feedback/clarification on high-level agendas seems potentially quite efficient. My worry would be about creating excessive selection pressure towards paths that are clear and simply justified. However, where an agenda does use specific assumptions and arguments to motivate its direction, early ‘review’ seems useful.
Thanks for the suggestion!