A possible counterpoint is that you are mostly advocating for awareness as opposed to specific interventions, and that this is moot, since pretty much everyone is aware of the problem now: society as a whole, policymakers in particular, and people in AI research and alignment.
I think this specific point is false, especially outside of tech circles. My experience has been that while people are concerned about AI in general, and quite open to X-risk arguments when they hear them, there is essentially zero awareness of X-risk beyond popular fiction. It’s possible that my sample isn’t representative here, but I would expect any bias to run the other way, given that the folks I interact with are often well-educated, New-York-Times-reading types, who are going to be better informed than average.
Even among those aware, there’s also a difference between far-mode “awareness,” in the sense of X-risk as some distant academic problem, and near-mode “awareness,” in the sense of “oh shit, maybe this could actually impact me.” Hearing a bunch of academic arguments, but never seeing anybody actually getting fired up or protesting, will implicitly lead people to put X-risk in the first bucket. If they personally believed it to be a big near-term risk, they’d certainly be angry and protesting, and if other people aren’t, that’s a signal that other people don’t really take it seriously. People sense a missing mood here and update on it.