Sure! Prior to this survey I would have thought:
Fewer NLP researchers would have taken AGI seriously, identified understanding its risks as a significant priority, and considered it catastrophic.
I found it particularly interesting that underrepresented researcher groups were more concerned (though it's less surprising in hindsight, especially considering the diversity of interpretations of catastrophe). I wonder how well the alignment community is doing with outreach to those groups.
There were more scaling maximalists (as the survey respondents themselves also expected)
I was also encouraged that the majority of people thought the majority of research is crap.
...Though I'm not sure how that math works out exactly. Unless people are self-aware that they're publishing crap :P