The level of concern and seriousness I see from ML researchers discussing AGI on any social media platform or in any mainstream venue seems wildly out of step with “half of us think there’s a 10+% chance of our work resulting in an existential catastrophe”.
In fairness, this is not quite half of the researchers. It is half of those who agreed to take the survey.
‘We contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021. [...] We received 738 responses, some partial, for a 17% response rate’.
I expect that worried researchers are more likely to agree to participate in the survey.
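To give a rough sense of how much that could matter, here is a minimal back-of-the-envelope sketch (my own illustration, not from the survey authors), plugging in the figures quoted above together with the OP's 48% figure:

```python
# Back-of-the-envelope bounds on how far response bias could move the headline number.
# Figures taken from the survey quote above and the OP's "48%"; everything else is illustrative.

contacted = 4271          # researchers contacted (approximate, per the survey)
responded = 738           # responses received
concerned_share = 0.48    # share of respondents giving a 10+% chance of catastrophe

response_rate = responded / contacted
print(f"response rate: {response_rate:.1%}")                        # ~17%

# Worst case for the headline claim: every non-respondent would have said "no".
lower = concerned_share * responded / contacted
print(f"lower bound over all contacted researchers: {lower:.1%}")   # ~8%

# Best case: every non-respondent would have said "yes".
upper = (concerned_share * responded + (contacted - responded)) / contacted
print(f"upper bound over all contacted researchers: {upper:.1%}")   # ~91%
```

So the unqualified "48% of researchers" could, in the extreme, correspond to anywhere from roughly 8% to 91% of the contacted population; the survey data alone can't pin it down.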
I recall that they tried to advertise / describe the survey in a way that would minimize response bias—like, they didn’t say “COME TAKE OUR SURVEY ABOUT AI DOOM”. That said, I am nevertheless still very concerned about response bias, and I strongly agree that the OP’s wording “48% of researchers” is a mistake that should be corrected.
I figured this would be obvious enough, and both surveys discuss this issue; but phrasing things in a way that encourages keeping selection bias in mind does seem like a good idea to me. I’ve tweaked the phrasing to say “In a survey, X”.