For a perfectly selfish actor, I think avoiding death pre-AGI makes sense (as long as the expected value of a post-AGI life is positive, which it might not be if one puts a lot of probability mass on s-risks). Like, every micromort of risk you take on (for example, by skiing for a day) decreases the probability that you live in a post-AGI world by roughly 1⁄1,000,000. So one can ask oneself, “would I trade this (micromort-inducing) experience for one millionth of my post-AGI life?”, and I think the answer a reasonable person would give in most cases is no. The biggest crux is just how much one values one millionth of their post-AGI life, which comes down to things like its length (could be billions of years!) and its value per unit time (which could be very positive or very negative).
Like, if I expect to live for a million years in a post-AGI world where life is much better than the life I’m leading right now, then skiing for a day would take away roughly one year of my post-AGI life in expectation. I definitely don’t value a day of skiing that much.
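To make the arithmetic concrete, here is a minimal sketch in Python. The numbers are purely illustrative assumptions (a million-year post-AGI lifespan, roughly one micromort for a day of skiing), not claims:

```python
MICROMORT = 1e-6  # one micromort = a one-in-a-million chance of death

def expected_post_agi_years_lost(post_agi_lifespan_years: float,
                                 micromorts_taken: float) -> float:
    """Expected years of post-AGI life forfeited by accepting this much risk."""
    return post_agi_lifespan_years * micromorts_taken * MICROMORT

# Assumed inputs: a million-year post-AGI life, ~1 micromort for a day of skiing.
print(expected_post_agi_years_lost(1_000_000, 1))  # -> 1.0 year in expectation
```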
This gets a bit complicated for people who are not perfectly selfish, as there are cases where one can trade micromorts for happiness, happiness for productivity, and productivity for impact on other people. So for instance, someone who works on AI safety and really likes skiing might find it net-positive to incur the micromorts because the happiness gained from skiing makes them better at AI safety, and their being better at AI safety has huge positive externalities that they’re willing to trade a bit of their expected lifespan for. In effect, they would be decreasing the probability that they themselves live to AGI, while increasing the probability that they and other people (of which there are many) survive AGI when it happens.
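A toy version of that not-perfectly-selfish tradeoff, continuing the sketch above. Every number here (the probability bump from marginally better safety work, the population, the per-person lifespan) is a made-up placeholder; the point is only the shape of the comparison:

```python
def selfish_cost_years(post_agi_lifespan_years: float, micromorts: float) -> float:
    # Expected years of your own post-AGI life given up by taking on the risk.
    return post_agi_lifespan_years * micromorts * 1e-6

def external_benefit_person_years(delta_p_good_outcome: float,
                                  people: float,
                                  years_per_person: float) -> float:
    # Expected extra person-years of post-AGI life from a tiny increase in the
    # probability of a good outcome, spread over everyone affected.
    return delta_p_good_outcome * people * years_per_person

cost = selfish_cost_years(1_000_000, 1)                   # ~1 of your own years
benefit = external_benefit_person_years(1e-12, 8e9, 1e6)  # ~8,000 person-years
print(cost, benefit)
```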
I think a million years is a weird anchor; starting at 10^20 to 10^40 years might be closer to the mark. Also, there is a multiplier from thinking faster as an upload, so that a million physical years becomes something like 10^12 subjective years.
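The subjective-time multiplier in this reply is just one more factor in the same calculation. A sketch, assuming (as the reply seems to) a ~10^6 speedup for an upload relative to a biological brain:

```python
def subjective_years(physical_years: float, upload_speedup: float) -> float:
    # An upload running `upload_speedup` times faster than a biological brain
    # experiences that many subjective years per physical year.
    return physical_years * upload_speedup

print(subjective_years(1e6, 1e6))  # -> 1e12 subjective years, as in the reply
```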