Here are a few possibilities:
They predict that catastrophic tipping points from climate change, and perhaps other human-caused environmental changes, will cause knock-on effects that eventually add up to our extinction, and that the policy fights needed to change course currently look like ones we will not manage to win, despite clear initial consequences in the form of fire, storm, and ocean heating.
They model a full nuclear exchange in the context of a worldwide war as highly possible and only narrowly evaded so far, and consider its consequences to either cause extinction or be at least as bad.
They are reasonably confident that pandemics, arising naturally or engineered without the help of AI, could in fact take out our species under favorable circumstances, and worry that the public-health battlefield is slowly tilting in favor of the diseases.
Probably smaller contributors going forward: They are familiar with other religious groups inclined to bring about the apocalypse and have some actual concern over their chance of success. (Probably U.S.-focused.)
They are looking at longer time frames, and are thinking of various catastrophes likely within the decades or centuries immediately after we would otherwise have developed AGI, some of them possibly caused by the very policies necessary to avoid developing it.
They think humans may voluntarily decide it is not worth existing as a species unless we properly make it worth their while, and that they should not be stopped from making this choice. Existence, and the world as it is for humans, is hell in some pretty important and meaningful ways.
They are not long-termists in any sense but stewardship, and are counting the possibility that everyone who exists and matters to them under a short-term framework simply ages and dies.
They consider most humans to currently be in a state of suffering worse than non-existence; on this view, the s-risk version of doom is already at 100%, and the 60% not-doom is mostly optimism that we can make that state better.
And overall, generally, a belief that not-doom is fragile; that species do not always endure; that there is no guarantee, and that our genus happens to be into the dice-rolling part of its lifespan even if we weren't doing various unusual things that might increase our risk as much as decrease it. (Probably worth noting that several species of humans, our equals based on archaeological finds and our partners based on genomic evidence, have gone extinct.)