I’m wondering what people would think about adopting the term Future Super Intelligences (FSI) rather than AGI or SAGI.
This would cover more scenarios (e.g. uploads or radically augmented humans) where the motivational systems of super-powerful actors may not be what we are used to. It would also signal that we are less worried about current technology than talking about “AIs” does; there is always that moment when you have to explain that you are not worried about backprop.
I never really thought that AGI implied any specific technology; certainly the “G” in AGI rules out any reading that refers to current technology, since current technology is not generally intelligent. AGI seems to capture what we are talking about quite well, IMO—Artificial (i.e. man-made) General Intelligence.
Do you really find yourself explaining that AGI is not the same as backpropagation very often?
I think AGI gets compressed to AI by the mainstream media, and then people working on current ML think that you people are worried about their work (which they find ridiculous, so they don’t want to engage).
An example of the compression is here.
We don’t help ourselves by calling it “AI risk”.
Actually, the term AI has traditionally encompassed both strong AI (a term which has been mostly replaced by LW and others with “AGI”) and applied (or narrow) AI, which includes expert systems, current ML, game playing, natural language processing, etc. It is not clear to me that the mainstream media is compressing AGI into AI; instead I suspect that many mainstream media writers simply have not yet adopted the term AGI, which is a fairly recent addition to the AI jargon. The mainstream media’s use of “AI” and the term “AI risk” are not so much wrong as they are imprecise.
I suspect that the term AGI was coined specifically to differentiate strong AI from narrow AI. If the mainstream media has been slow in adopting the term AGI, I don’t see how adding yet another, newer term will help—in fact, doing so will probably just engender confusion (e.g. people will wonder what, if anything, distinguishes AGI from FSI).
AGI goes back over ten years, doesn’t it? Longer than the term “AI risk” has been around, as far as I can tell. We had strong vs. weak AI before that.
“AGI risk” seems like a good compromise? Who runs comms for the AGI risk community?
Imprecision matters when you are trying to communicate and build communities.
I certainly prefer it to “FSI risk”.
I doubt anyone does. Terms catch on or fail to catch on organically (or memetically, to be precise).
Perhaps. But I doubt that a significant amount of the reluctance to take the unfriendly AGI argument seriously is due to confusion over terminology. Nor is changing terminology likely to cause a lot of people who previously did not take the argument seriously to begin to take it seriously. For example, there are some regulars here on LW who do not think that unfriendly AGI is a significant risk. But I doubt that any LW regular is confused about the distinction between AGI and narrow AI.