Another reason the broader ML field may be reluctant to discuss AGI is a cultural shift that happened after the AI winters. I’m quoting part of A Bird’s Eye View of the ML Field [Pragmatic AI Safety #2], where I first saw this idea:
AI winter made it less acceptable to talk about AGI specifically, and people don’t like people talking about capabilities making it closer. Discussions of AGI are not respectable, unlike in physics where talking about weirder long-term things and extrapolating several orders of magnitude is normal. AGI is a bit more like talking about nuclear fusion, which has a long history of overpromises. In industry it has become somewhat more acceptable to mention AGI than in academia: for instance, Sam Altman recently tweeted “AGI is gonna be wild” and Yann LeCun has recently discussed the path to human-level AI.
In general, this aversion to discussing AGI makes risks from AGI a tough sell.
Anyway, I think your post (“The inordinately slow spread...”) is good. Figuring out how to get the broader ML community to talk more explicitly about AGI and care more about AGI x-risk would be a huge win.