Alan, since there are in fact known existential risks, you are jumping to conclusions here without having done even superficial research (or you are carefully hiding that research by ignoring the conclusions you disagree with).
(Agreed, though, that global warming isn’t a direct existential risk, but it could spur geopolitical instability or dangerous technological development. Disagree that global thermonuclear war is very unlikely, especially considering accidents, but even that seems highly unlikely to be existential.)
I think that the original poster was discounting low-probability non-anthropogenic risks (the sun going nova, a War of the Worlds invasion) and counting as “unknown unknowns” any risk that is currently unimaginable, that is, any risk involving significant new developments that would limit the capacity of human reasoning (even loosely construed) to assess its specific probability or consequences at this time; this includes all fooms, gray goos, etc.
I would agree with the poster that a general posture of readiness (that is, education, democracy, limits on overall social inequality, and a precautionary attitude toward new technologies) is probably orders of magnitude more effective at dealing with such threats than any specific measure taken before a specific threat becomes clearer.
And I dispute the characterization that, if I’m right about the poster’s attitudes, they are “carefully hiding conclusions [they] disagree with”; refusing to place vague, hand-waving categories of possibility like gray goo in the same class as much more specific possibilities like nuclear holocaust may not be your attitude, but that does not make it dishonest.
Seconded. Also see:
Nick Bostrom, “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards” (2002)
Global Catastrophic Risks, eds. Nick Bostrom and Milan M. Ćirković (Oxford University Press, 2008)