The term AGI goes back over 10 years, doesn't it? Longer than the term AIrisk has been around, as far as I can tell. We had strong vs. weak AI before that.
AGIrisk seems like a good compromise? Who runs comms for the AGIrisk community?
Imprecision matters when you are trying to communicate and build communities.
I certainly prefer it to FSIrisk.
I doubt anyone does. Terms catch on or fail to catch on organically (or memetically, to be precise).
Perhaps. But I doubt that much of the reluctance to take the unfriendly-AGI argument seriously is due to confusion over terminology, and I doubt that changing the terminology would lead many people who currently dismiss the argument to start taking it seriously. For example, some regulars here on LW do not think that unfriendly AGI is a significant risk, but I doubt that any LW regular is confused about the distinction between AGI and narrow AI.