I’m not sure how much space to give the more unreasonable criticisms like the ones you point out. My call would be that the highest-quality considerations in all directions should be prioritized over how influential or authoritative the critics are. That said, the existence of these voices deserves mention, though it may illustrate the social dimension more than the factual one.
I agree those criticisms are pretty unreasonable. However, I think they are representative of the discourse: Yann LeCun, for example, is a very important and influential person, and also an AI expert, so he’s not cherry-picked.
Also see this recent review from someone who seems thoughtful and respected, Notes on Existential Risk from Artificial Superintelligence (michaelnotebook.com), who says:

> I will say this: those pieces all make a case for extraordinary risks from AI (albeit in different ways); I am somewhat surprised that I have not been able to find a work of similar intellectual depth arguing that the risks posed by ASI are mostly of “ordinary” types which humanity knows how to deal with. This is often asserted as “obviously” true, and given a brief treatment; unfortunately-often the rebuttal is mere proof by ridicule, or by lack-of-imagination (often people whose main motivation appears to be that people they don’t like are worried about ASI xrisk). It’s perhaps not so surprising: “the sky is not falling” is not an obvious target for a serious book-length treatment. Still, I hope someone insightful and imaginative will fill the gap. Three brief-but-stimulating shorter treatments are: Anthony Zador and Yann LeCun, Don’t Fear the Terminator (2019); Katja Grace, Counterarguments to the basic AI x-risk case (2022); and David Krueger, A list of good heuristics that the case for AI x-risk fails (2019).
i.e. he thinks there just isn’t much genuinely good criticism out there, to the point where he counts LeCun’s piece among the top three! (And note that the other two aren’t exactly harsh critics; they’re more like AI safety people playing devil’s advocate...)
Completely agreed on the state of the discourse. I think the more interesting discussions start once you acknowledge at least the vague general possibility of serious risk (see, e.g., the recent debate posts on the EA Forum). I still think those arguments are wrong, but they’re at least worth engaging with.
If I were giving a course, I wouldn’t really know what to do with actively bad opinions beyond noting “this person says XYZ” and maybe having the students reason about it as an exercise. But do that too much and it starts to feel like gloating.
Honestly, I think the strongest criticism will come from someone arguing that there isn’t enough leverage in our world for a superintelligence to be much more powerful than us, for good or ill. People who argue that ASI is absolutely necessary because it will make us immortal and let us colonise the stars, but that it doesn’t warrant any worry about the possibility that it may direct its vast power toward less desirable goals, are just unserious. There is also, obviously, the possibility that AGI is still far off, but that says little about whether it’s dangerous, only about whether the danger is imminent.