Thanks for this thoughtful article.
It seems to me that the first and second examples have something in common: both underestimate the degree to which people will react to perceived dangers. I think this is fairly common in speculations about potential future disasters, and I have called it sleepwalk bias. It seems like something one should be able to correct for.
I think there is an element of sleepwalk bias in the AI risk debate; see this post, where I criticise a particular vignette.