I think this list is interesting and potentially useful, and I’m glad you put it together. I also generally think it’s a good and useful norm for people to seriously engage with the arguments they (at least sort-of/overall) disagree with.
But I’m also a bit concerned about how this is currently presented. In particular:
This is titled “A list of good heuristics that the case for AI x-risk fails”.
The heuristics themselves are stated as facts, not as something like “People may believe that...” or “Some claim that...” (using words like “might” could also help).
A comment of yours suggests you’ve already noticed this. But I think it’d be pretty quick to fix.
Your final paragraph, a very useful caveat, comes only after all the heuristics have been listed as facts.
I think these things will have relatively small downsides, given the likely quite informed and attentive audience here. But a bunch of psychological research I read a while ago (2015-2017) suggests there could still be some downsides. E.g.:
Information that initially is presumed to be correct, but that is later retracted or corrected, often continues to influence memory and reasoning. This occurs even if the retraction itself is well remembered. The present study investigated whether the continued influence of misinformation can be reduced by explicitly warning people at the outset that they may be misled. A specific warning—giving detailed information about the continued influence effect (CIE)—succeeded in reducing the continued reliance on outdated information but did not eliminate it. A more general warning—reminding people that facts are not always properly checked before information is disseminated—was even less effective. In an additional experiment, a specific warning was combined with the provision of a plausible alternative explanation for the retracted information. This combined manipulation further reduced the CIE but still failed to eliminate it altogether.

And also:

Information presented in news articles can be misleading without being blatantly false. Experiment 1 examined the effects of misleading headlines that emphasize secondary content rather than the article’s primary gist. [...] We demonstrate that misleading headlines affect readers’ memory, their inferential reasoning and behavioral intentions, as well as the impressions people form of faces. On a theoretical level, we argue that these effects arise not only because headlines constrain further information processing, biasing readers toward a specific interpretation, but also because readers struggle to update their memory in order to correct initial misconceptions.
Based on that sort of research (for a tad more info on it, see here), I’d suggest:
Renaming this to something like “A list of heuristics that suggest the case for AI x-risk is weak” (or you could even keep “fails”, if you add a hedge like “suggest” or “might”)
Rephrasing the heuristics so they’re stated as disputable (or even false) claims, rather than facts. E.g., “Some people may believe that this concern is being voiced exclusively by non-experts like Elon Musk, Stephen Hawking, and the talkative crazy guy next to you on the bus.” ETA: Putting them in quote marks might be another option for that.
Moving the caveat that’s currently in the final paragraph to before the list of heuristics.
Perhaps also adding sub-points under the particularly disputable dot points. E.g.:
“(But note that several AI experts have now voiced concern about the possibility of major catastrophes from advanced AI systems, although there’s still no consensus on this.)”
I also recognise that several of the heuristics really do seem good, and probably should make us at least somewhat less concerned about AI. So I’m not suggesting trying to make the heuristics all sound deeply flawed. I’m just suggesting perhaps being more careful not to end up with some readers’ brains, on some level, automatically processing all of these heuristics as definite truths that definitely suggest AI x-risk isn’t worthy of attention.
Sorry for the very unsolicited advice! It’s just that preventing gradual slides into false beliefs (including from well-intentioned efforts that do actually contain the truth in them!) is sort of a hobby-horse of mine.
Also, one other heuristic/proposition that, as far as I’m aware, is simply factually incorrect (rather than “flawed but in debatable ways” or “actually pretty sound”) is “AI researchers didn’t come up with this concern, Hollywood did. Science fiction is constructed based on entertaining premises, not realistic capabilities of technologies.” So there it may also be worth pointing out in some manner that, in reality, prominent AI researchers raised concerns somewhat similar to those discussed now quite early on. E.g., I. J. Good apparently wrote in 1959:

Whether [an intelligence explosion] will lead to a Utopia or to the extermination of the human race will depend on how the problem is handled by the machines. The important thing will be to give them the aim of serving human beings.