This, or something like it, is also one reason why the sorts of hopes and fears about AI that are common on LW are not so common in the rest of the world. “These people say that technological developments that might be just around the corner have the potential to reshape the world completely, and therefore we need to sink a lot of time and effort and money into worrying about ‘AI safety’; well, we’ve heard that sort of thing before. We’ve learned not to dedicate ourselves to millenarian religions, and this is just the same thing in fancy dress.”
It’s a very sensible heuristic. It will fail catastrophically any time there is an outrageously large threat or opportunity that isn’t easy to see. (Arguably it has been failing that way over the last few decades with climate change. Arguably something similar is at work in people who refuse vaccination on the grounds that COVID-19 isn’t as bad as everyone says it is.) Not using it will fail pretty badly any time there isn’t an outrageously large threat or opportunity but there is something that can be plausibly presented as one. I don’t know of any approach actually usable by a majority of people that doesn’t suffer one or the other of those failure modes.