How important is trying to personally live longer for decreasing existential risk? IMO, it seems that most existential-catastrophe risk occurs sooner rather than later, so I doubt living much longer is extremely important. For example, Wikipedia says that a study at the Singularity Summit found that the median predicted date for the singularity is 2040, and one person gave an 80% confidence interval of 5 − 100 years. Nanotechnology seems to be predicted to come sooner rather than later as well. What does everyone else think?
I’m having trouble imagining how risk would ever go down, short of entering a machine-run totalitarian state, so I clearly don’t have the same assessment of bad things happening “sooner rather than later”. I can’t think of a single dangerous activity that is harder or less dangerous now than it was in the past, and I suspect this trend will continue. The only things that will happen sooner rather than later are the establishment of stable and safe equilibria (like post-Cold War nuclear politics). If my personally being alive meaningfully affects an equilibrium (implicit or explicit), then humanity is quite completely screwed.
For one, Yudkowsky in Artificial Intelligence as a Positive and Negative Factor in Global Risk says that an artificial general intelligence could potentially use its superintelligence to decrease existential risk in ways we haven’t thought of. Additionally, I suspect (though I am rather uninformed on the topic) that Earth-originating life will be much less vulnerable once it spreads away from Earth, as I think many catastrophes would be local to a single planet. I suspect catastrophes from nanotechnology would be one such example.