A meta-related comment from someone who’s not deep into alignment (yet) but does work in AI/academia.
My impression on reading LessWrong has been that the people who are deep into alignment research are generally spending a great deal of their time working on their own independent research agendas, which—naturally—they feel are the most fruitful paths to take for alignment.
I’m glad that we seem to be seeing a few more posts of this nature recently (e.g. the Infra-Bayes ones), where established researchers spend more of their time investigating and critiquing others’ approaches. This is one good way to get alignment researchers to stack more, imo.