Assuming you have a >10% chance of living forever, wouldn’t that necessitate avoiding any chance of accidental death, to minimize the probability of the “die before AGI” outcome? If you assume AGI is inevitable, then one should simply maximize risk aversion to prevent cessation of consciousness, or at least permanent information loss of one’s brain.
For a perfectly selfish actor, I think avoiding death pre-AGI makes sense (as long as the expected value of a post-AGI life is positive, which it might not be if one has a lot of probability mass on s-risks). Like, every micromort of risk you induce (for example, by skiing for one day) would decrease the probability you live in a post-AGI world by roughly 1⁄1,000,000. So one can ask oneself, “would I trade this (micromort-inducing) experience for one millionth of my post-AGI life?”, and I think the answer a reasonable person would give in most cases is no. The biggest crux is just how much one values one millionth of their post-AGI life, which comes down to sub-questions like its length (could be billions of years!) and its value per unit time (which could be very positive or very negative).
Like, if I expect to live for a million years in a post-AGI world where I expect life to be much better than the life I’m leading right now, then skiing for a day would take away roughly one year from my post-AGI life in expectation. I definitely don’t value skiing that much.
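To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch. The one-micromort-per-skiing-day figure and the million-year lifespan follow the example in the comment; everything else is an illustrative assumption, not a precise estimate.

```python
# Back-of-the-envelope version of the trade-off above.
# Assumptions: one day of skiing ~ 1 micromort of extra death risk,
# and a post-AGI life of 1,000,000 years (the comment's example anchor).

MICROMORT = 1e-6                 # probability of death per micromort
post_agi_years = 1_000_000       # assumed post-AGI lifespan

# Expected post-AGI years forfeited by taking on one micromort of risk
expected_years_lost = MICROMORT * post_agi_years
print(f"Expected post-AGI years lost per skiing day: {expected_years_lost:.2f}")
# -> 1.00, matching the "a day of skiing costs ~one expected year" estimate
```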
This gets a bit complicated for people who are not perfectly selfish, as there are cases where one can trade micromorts for happiness, happiness for productivity, and productivity for impact on other people. So for instance, someone who works on AI safety and really likes skiing might find it net-positive to incur the micromorts because the happiness gained from skiing makes them better at AI safety, and them being better at AI safety has huge positive externalities that they’re willing to trade their lifespan for. In effect, they would be decreasing the probability that they themselves live to AGI, while increasing the probability that they and other people (of which there are many) survive AGI when it happens.
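As a rough illustration of the structure of that altruistic trade-off (every number below is a made-up placeholder, chosen only to show how the comparison works, not to argue for any particular conclusion):

```python
# Structure of the altruistic trade-off; all numbers are hypothetical placeholders.

MICROMORT = 1e-6
own_post_agi_years = 1e6                 # assumed length of one's own post-AGI life
skiing_micromorts = 1.0                  # assumed risk from one day of skiing

# Personal cost: expected post-AGI years one gives up
own_years_lost = skiing_micromorts * MICROMORT * own_post_agi_years

# Altruistic benefit: suppose the recovery from skiing very slightly improves
# one's AI safety work, nudging up the probability that AGI goes well.
delta_p_agi_goes_well = 1e-12            # hypothetical productivity-driven shift
people_affected = 8e9                    # rough current world population
avg_post_agi_years = 1e6                 # assumed post-AGI lifespan per person

expected_person_years_gained = (
    delta_p_agi_goes_well * people_affected * avg_post_agi_years
)

print(f"Own expected post-AGI years lost:       {own_years_lost:.2f}")
print(f"Others' expected post-AGI years gained: {expected_person_years_gained:.0f}")
# With these placeholders the externality (~8,000 person-years) dwarfs the
# personal cost (~1 year), which is the shape of the argument in the comment.
```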
I think a million years is a weird anchor, starting at 10^20 to 10^40 might be closer to the mark. Also, there is a multiplier from thinking faster as an upload, so that a million physical years becomes something like 10^12 subjective years.
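Re-running the same per-micromort arithmetic with these larger anchors and an assumed 10^6 subjective speedup for uploads (all numbers illustrative):

```python
# Same per-micromort arithmetic with the larger anchors and an assumed
# subjective speedup for uploads. All numbers illustrative.

MICROMORT = 1e-6
speedup = 1e6                        # assumed subjective-time multiplier for an upload

for physical_years in (1e6, 1e20, 1e40):
    subjective_years = physical_years * speedup
    expected_loss = MICROMORT * subjective_years
    print(f"{physical_years:.0e} physical years -> "
          f"{expected_loss:.0e} expected subjective years lost per micromort")
# 1e+06 -> 1e+06, 1e+20 -> 1e+20, 1e+40 -> 1e+40: the 10^6 speedup and the
# 10^-6 micromort factor cancel, so the lifespan anchor dominates the result.
```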
If you think the coming of AGI is inevitable, but you think that surviving AGI is hard and you might be able to help with it, then you should do everything you can to make the transition to a safe AGI future go well, including possibly sacrificing your own life, if you value the lives of your loved ones in aggregate more than your own life alone. In a sense, working hard to make AGI go well is ‘risk aversion’ on a society-wide basis, but I’d call the attitude of the agentic actors in this scenario more one of ‘ambition maximizing’ than of ‘personal risk aversion’.