Almost everyone responding (75%) believes there’s at least a 10% chance of a 90% culling of human population sometime in the next 90 years.
If we’re right, it’s incumbent on us to consider sacrificing significant short-term pleasure and freedom to reduce this risk. I haven’t heard any concrete proposals that seem worth pushing, but the proposing and evaluating needs to happen.
What makes you think that sacrificing freedom will reduce this risk, rather than increase it?
Obviously it depends on the specific sacrifice. I absolutely hope we don’t create a climate where it’s impossible to effectively argue against stupid signalling-we-care policies, or where magical thinking automatically credits [sacrifice] with [intended result].
Only if we have some sense of particular measures we can take that will significantly reduce that probability.
I agree that we shouldn’t seek to impose or adopt measures that are ineffective. It’s puzzling to me that I’ve thought so little about this. Probably: 1) it’s hard to predict the future, and I don’t like being wrong; 2) my conclusions might impel me to do something, and doing something is hard; 3) people who do nothing but talk about how great things would be if they were in charge: ick! (See also Chesterton’s Fence.)
But I don’t have to gain enough power to save the world before it’s worth thinking, without reservation or aversion, about what needs doing. (Chesterton again: “If a thing is worth doing, it is worth doing badly.”)
An important point that I had intended the grandparent to make, but on reflection I realize wasn’t clear, is that not all of that 10% corresponds to a single type of cataclysm. Personally, I’d put much of the probability mass on “something we haven’t foreseen.”