I’m confused by this response. Did I say something to imply that humans can only have one aim at a time? I do think that almost all humans would agree that the world being saved is better than the world not being saved, but of course that competes for money and attention with all other goals, both altruistic and selfish. I happen to think that people ought to weight saving the world highly, but I didn’t say that in the post you’re replying to, I don’t think that people actually do weight saving the world highly, and I didn’t say that I think people do weight saving the world highly. All I said was that it’s important to compute order of magnitude figures before drawing conclusions about existential risk.
If people don’t value preventing THE END OF THE WORLD highly, then they have no reason to donate to organisations which are purportedly trying to prevent DOOM.
Since some people seem to think that preventing THE END OF THE WORLD is very important—while a great many others barely seem to think twice about the issue—any attempt to obtain public agreement on these utilities seems to itself be doomed.
I remember the majority of people in the US being afraid of nuclear war with the USSR. This was a rational fear, although I suspect most people actually held it out of susceptibility to propaganda and mass hysteria.
This suggests to me that it’s difficult to get people to care about a particular risk until some critical mass is reached, after which the fear may even become excessive.