I broadly agree that we’re biased towards the past and against the future, although I think a large part of the latter is that we don’t like the uncertainty involved in it.
While the AI debate is well beyond the scope of this post, I will say that I would expect the future to continue getting weirder the more non-human processing capability exists, and I personally don’t expect this weirdness to be survivable past a certain threshold.
More generally, one implication is that you should expect negative news to be less informative than positive news: there's probably a gap between the negative things actually happening in reality and the negative news you're hearing, so your information sources contain more negativity than exists in real life.
Agreed.