I think the issue is pretty compactly explained by more research being closed source, combined with labs/companies making a lot of the alignment progress to date.
Also, you probably won't hear about most incremental AI alignment progress on LW, for the simple reason that the site would be flooded with it, so people will underestimate progress.
Alexander Gietelink Oldenziel does talk about pockets of Deep Expertise in academia, but those pockets aren't activated right now, so they are so far irrelevant to progress.