I’m arguing exactly the opposite: experts may want to make comparisons carefully, but those trying to transmit the case to the general public should, at this point, stop using these rhetorical shortcuts that imply wrong and misleading things.
This book argues (convincingly IMO) that it’s impossible to communicate, or even think, anything whatsoever, without the use of analogies.
If you say “AI runs on computer chips”, then the listener will parse those words by conjuring up their previous distilled experience of things-that-run-on-computer-chips, and that previous experience will be helpful in some ways and misleading in other ways.
If you say “AI is a system that…” then the listener will parse those words by conjuring up their previous distilled experience of so-called “systems”, and that previous experience will be helpful in some ways and misleading in other ways.
Etc. Right?
If you show me an introduction to AI risk for amateurs that you endorse, then I will point out the “rhetorical shortcuts that imply wrong and misleading things” that it contains—in the sense that it will have analogies between powerful AI and things-that-are-not-powerful-AI, and those analogies will be misleading in some ways (when stripped from their context and taken too far). This is impossible to avoid.
Anyway, if someone says:
When it comes to governing technology, there are some areas, like inventing new programming languages, where it’s awesome for millions of hobbyists to be freely messing around; and there are other areas, like inventing new viruses, or inventing new uranium enrichment techniques, where we definitely don’t want millions of hobbyists to be freely messing around, but instead we want to be thinking hard about regulation and secrecy. Let me explain why AI belongs in the latter category…
…then I think that’s a fine thing to say. It’s not a rhetorical shortcut; rather, it’s a way to explain what you’re saying pedagogically, by connecting it to the listener’s existing knowledge and mental models.
I agree with you that analogies are needed, but they are also inevitably limited. So I’m fine with saying “AI is concerning because its progress is exponential, and we have seen from COVID-19 that we need to intervene early,” or “AI is concerning because it can proliferate as a technology the way nuclear weapons do,” or “AI is like biological weapons in that countries will pursue and use these because they seem powerful, without appreciating the dangers they create if they escape control.” But what concerns me is that you seem to be suggesting we should make the general claim “AI poses uncontrollable risks like pathogens do,” or “AI needs to be regulated the way biological pathogens are,” and that’s something I strongly oppose. Once all of the specifics are ignored, the analogy fails.
In other words, “while I think the disanalogies are compelling, comparison can still be useful as an analytic tool—while keeping in mind that the ability to directly learn lessons from biorisk to apply to AI is limited by the vast array of other disanalogies.”
I agree with this point when it comes to technical discussions. I would like to add the caveat that when talking to a total amateur, the sentence:
Is the fastest way I’ve found to transmit information. Maybe 30% of the entire AI risk case can be delivered in the first four words.