I’d say the bigger reason people like to think that some apocalypse is coming is that we have a bias towards viewing things negatively, even when those negative views mislead us. In particular, we tend to believe that today is a wasteland by our values compared with the past, which we naturally cast as a golden age. Unfortunately, those golden ages were fictional, not real, and by at least some people’s values, things have improved, not declined.
The archived article is below:
https://archive.is/rQPwa
One implication is that one should treat “things are getting worse” claims as weaker evidence than “things are getting better” claims; in particular, there’s an a priori reason to distrust negative claims that doesn’t apply to positive claims.
This is part of why I suspect claims of AI existential risk are wrong, and negativity bias is why I suspect people would fall for a false existential risk claim.
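To make the evidential asymmetry concrete, here is a minimal Bayesian sketch in Python (all the numbers are illustrative assumptions of mine, not anything from the article): if negative reports get produced at a high rate whether or not things are actually getting worse, their likelihood ratio, and therefore their evidential weight, is smaller than that of positive reports.

def posterior(prior, p_report_if_true, p_report_if_false):
    # Bayes' rule for a binary hypothesis after observing one report.
    numerator = p_report_if_true * prior
    return numerator / (numerator + p_report_if_false * (1 - prior))

prior_worse = 0.5  # start agnostic about whether things are getting worse

# Hypothetical biased channel: "things are worse" gets reported 90% of
# the time when true, but also 40% of the time when false (negativity bias).
p_after_negative_report = posterior(prior_worse, 0.9, 0.4)  # ~0.69

# Hypothetical positive channel: "things are better" is reported 60% of
# the time when true and only 10% of the time when false.
p_after_positive_report = posterior(1 - prior_worse, 0.6, 0.1)  # ~0.86

print(p_after_negative_report, p_after_positive_report)

With these made-up rates, the “worse” report carries a likelihood ratio of 0.9/0.4 = 2.25, against 0.6/0.1 = 6 for the “better” report, which is the precise sense in which the negative claim should count as less evidence.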
I broadly agree that we’re biased towards the past and against the future, although I think a large part of the latter is that we dislike the uncertainty the future involves.
While the AI debate is well beyond the scope of this post, I will say that I would expect the future to continue getting weirder the more non-human processing capability exists, and I personally don’t expect this weirdness to be survivable past a certain threshold.
More generally, one implication is that one should expect negative news to be less informative than positive news: there’s probably a gap between how bad things actually are and the negative news you are hearing, so there’s more negativity in your information sources than exists in real life.
Agreed.