How do you know this? It's true that their utility functions aren't linear, but it doesn't follow that this is why they don't take such efforts seriously. Near-Earth Objects: Finding Them Before They Find Us reports on concerted efforts to prevent extinction-level asteroids from colliding with Earth. This shows that people are (sometimes) willing to act on small probabilities of human extinction.
It’s pretty easy to accept the possibility that an asteroid impact could wipe out humanity, given that something very similar has happened before. You have to overcome a much larger inferential distance to explain the risks from an intelligence explosion.