How important is trying to personally live longer for decreasing existential risk? IMO, it seems that most of the risk of existential catastrophe occurs sooner rather than later, so I doubt living much longer is extremely important. For example, Wikipedia says that a study at the Singularity Summit found that the median predicted date for the singularity is 2040, and one person gave an 80% confidence interval of 5 to 100 years. Nanotechnology also seems to be predicted to come sooner rather than later. What does everyone else think?
Is there any justification for the leverage penalty? I understand that it would apply if there were a finite number of agents, but if there's an infinite number of agents, couldn't all agents have an effect on an arbitrarily large number of other agents? Shouldn't the prior probability instead be P(event A | n agents will be affected) = (1/n) + P(there being infinite entities)? If this is the case, then it seems the leverage penalty won't stop one from being mugged.
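To spell out the worry with a toy calculation (the payoff size and the value of P(there being infinite entities) below are placeholders I made up, not numbers from the original discussion): the pure 1/n leverage penalty exactly cancels however large a payoff the mugger names, but adding a constant term for infinite entities puts a floor under the prior, so the expected value scales with the claimed payoff again.

```python
# Toy numbers purely for illustration; both values are placeholders.
claimed_payoff = 10**100   # stand-in for the mugger's astronomically large claimed payoff
p_infinite = 1e-30         # assumed prior probability that there are infinitely many agents

# Pure leverage penalty: prior ~ 1/n, so the expected value stays bounded
# no matter how large a payoff the mugger names.
ev_leverage_only = claimed_payoff * (1 / claimed_payoff)                 # ≈ 1

# Modified prior: 1/n + P(infinite entities) has a constant floor,
# so the expected value grows with the claimed payoff.
ev_modified_prior = claimed_payoff * (1 / claimed_payoff + p_infinite)   # ≈ 1e70

print(ev_leverage_only, ev_modified_prior)
```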
Thanks. That really helps. Do you know of any decent arguments suggesting that working to develop safe tool AI (or some other non-AGI AI) would increase existential risk?
Are there any decent arguments saying that working to develop safe AGI would increase existential risk? I’ve found none, but I’d like to know because I’m considering developing AGI as a career.
Edit: What about AI that’s not AGI?
I see what you mean. I don’t really know enough about Pascal’s mugging to determine whether decreasing existential risk by one millionth of a percent is worth it, but it’s a moot point, as it seems reasonable that existential risk could be reduced by far more than one millionth of one percent.
I don’t think decreasing existential risk falls into it, because the probability of an existential catastrophe isn’t extremely small. One survey taken at Oxford estimated a ~19% chance of human extinction prior to 2100. Determining the probability of existential catastrophe is very challenging and the aforementioned statistic should be viewed skeptically, but a probability anywhere near 19% would still (as far as I can tell) prevent it from falling prey to Pascal’s mugging.
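As a rough sanity check (my own illustrative numbers: roughly 7 billion people alive today, and all future generations ignored), a probability in the ~19% range already implies enormous expected stakes without invoking any Pascalian tiny probability:

```python
p_extinction = 0.19   # the Oxford survey's ~19% estimate for extinction before 2100
population = 7e9      # rough current world population; future generations ignored

expected_deaths = p_extinction * population
print(f"{expected_deaths:.2e}")   # ~1.3e9 expected deaths, driven by an ordinary-sized probability
```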
For many utility functions, I think donating to an organisation working on decreasing existential risk would be incredibly efficient, as:
Even if we use the most conservative of [estimates of the utility of decreasing existential risk], which entirely ignores the possibility of space colonisation and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10^16 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives. (Bostrom, Existential Risk Prevention as Global Priority)
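For what it’s worth, the arithmetic behind Bostrom’s last sentence is just a unit conversion: one millionth of one percentage point is a fraction of 10^-8, and 10^-8 of 10^16 lives is 10^8 lives, i.e. a hundred times a million lives.

```python
lives_at_stake = 10**16       # Bostrom's conservative lower bound on the expected loss
risk_reduction = 1e-6 * 1e-2  # one millionth of one percentage point, as a fraction: 1e-8

expected_lives_saved = lives_at_stake * risk_reduction
print(expected_lives_saved)         # 1e8 lives
print(expected_lives_saved / 1e6)   # = 100 (a hundred times a million lives)
```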
For one, Yudkowsky in Artificial Intelligence as a Positive and Negative Factor in Global Risk says that an artificial general intelligence could potentially use its superintelligence to decrease existential risk in ways we haven’t thought of. Additionally, I suspect (though I am rather uninformed on the topic) that Earth-originating life will be much less vulnerable once it spreads away from Earth, as I think many catastrophes would be local to a single planet. I suspect catastrophes from nanotechnology would be one such example.