So if you place any non-negligible value on future generations whose existence is threatened, reducing existential risk has to be the best possible contribution to humanity you are in a position to make.
This sentence smuggles in the assumption that we are in a position to reduce existential risk.
Two big risks are global warming and nuclear war.
The projections for large changes in climate depend on continuing growth in wealth and population to produce the high levels of carbon-dioxide emissions needed to create the change. Even if it goes horribly wrong, we are still looking at a self-limiting problem, with billions dying but billions living on in nuclear-powered prosperity at higher latitudes. It is not an existential risk.
Nuclear war in the next hundred years is shaping up to be second-rank nations duking it out with 20-kiloton fission weapons, not 200-kiloton fusion weapons. That is enough to change building codes in ways that make current earthquake precautions seem cheap, but it is a long way short of an existential risk.
Existential risk might be large, but it comes from the unknown unknowns, not the known unknowns, and since we do not even know what we don't know, there is nothing useful we can do beyond maintaining a willingness to recognise a new danger when it makes its possibility known.
Alan, since there are in fact known existential risks, you are jumping to conclusions here without even superficial research (or you are carefully hiding that fact by ignoring the conclusions you disagree with).
Robyn Dawes:

Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any. [...] I have often used this edict with groups I have led—particularly when they face a very tough problem, which is when group members are most apt to propose solutions immediately.
(Agreed, though, that global warming isn’t a direct existential risk, but it could spur geopolitical instability or dangerous technological development. Disagree that global thermonuclear war is very unlikely, especially considering accidents, but even that seems highly unlikely to be existential.)
I think that the original poster was discounting low-probability non-anthropogenic risks (sun goes nova, War of the Worlds) and counting as “unknown unknowns” any risk which is unimaginable—that is, which involves significant new developments that would tend to limit the capacity of human (metaphorical) reasoning to assess the specific probability or consequences at this time; this includes all fooms, gray goos, etc.
I would agree with the poster that a general attitude of readiness (that is, education, democracy, limits on overall social inequality, and precautionary attitudes to new technologies) is probably orders of magnitude more effective at dealing with such threats than any specific measures until a specific threat becomes clearer.
And I dispute the characterization that, if I’m correct about the poster’s attitudes, they’re “carefully hiding conclusions [they] disagree with”; a refusal to consider vague handwaving categories of possibility like gray goo in the same class as much-more-specific possibilities like nuclear holocaust may not be your attitude, but that does not make it dishonest.
I agree with your characterization of the risks of global warming and nuclear war. I get the impression that people allow the reasonably high probability of a few degrees of warming or a few nuclear attacks to unduly influence their estimates of the probability of true existential risk from these sources.
In both cases I’m much more receptive to discussions of harm reduction than to scaremongering about ‘the end of the world as we know it’. The twentieth century has quite a few examples of events that caused tens of millions of deaths and yet did not represent existential risks. Moderate global warming or a few nuclear detonations in or over major cities would be highly disruptive, would have a high cost in human lives, and are certainly legitimate concerns, but they are not existential risks, and talking of them as such is unhelpful in my opinion.
Seconded. Also see:

Nick Bostrom’s Existential Risks paper from 2002
Global Catastrophic Risks