Yes, the utility of money is currently fairly well bounded. Liability insurance is a proxy for the cost of imposing risks on people, and like most proxies it comes apart in extreme cases.
However: would you accept a 16% risk of death within 10 years in exchange for an increased chance of living 1000+ years? Assume that your quality of life for those 1000+ years would be in the upper few percentiles of current healthy life. How much of an increase in that chance would you need before accepting the risk?
That seems closer to a direct trade of the risks and possible rewards involved, though it still misses something. One problem is that it treats the cost of the risk to humanity as simply the linear sum of the risks acceptable to each individual currently alive, and I don't think that's quite right.
If you assume a 5000-year future lifespan if you win, 60 years if you lose, and you discount each subsequent year by 5 percent, you should take the bet until the odds of doom are worse than 48.6 percent.
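For concreteness, here is a minimal sketch of that break-even calculation. It assumes simple exponential discounting and treats losing as death after `lose_years`; the parameter names are mine, and the threshold is quite sensitive to these modeling choices, so different assumptions about what losing costs you will give different numbers.

```python
# Break-even doom probability for the longevity bet, under explicit
# toy assumptions: each life-year t from now is worth d**t, "winning"
# gives win_years of life, "losing" gives lose_years, and declining
# the bet gives baseline_years.

def discounted_years(years: int, d: float) -> float:
    """Present value of `years` of life, valuing year t at d**t."""
    # Geometric series: sum_{t=0}^{years-1} d**t = (1 - d**years) / (1 - d)
    return (1 - d ** years) / (1 - d)

def breakeven_doom(win_years: int, baseline_years: int,
                   lose_years: int = 0, d: float = 0.95) -> float:
    """Doom probability p at which taking the bet is exactly as good as
    declining it: (1-p)*V(win) + p*V(lose) = V(baseline)."""
    v_win = discounted_years(win_years, d)
    v_lose = discounted_years(lose_years, d)
    v_base = discounted_years(baseline_years, d)
    return (v_win - v_base) / (v_win - v_lose)

# 5000 years if you win, 60 years if you abstain, immediate death if
# you lose, 5% annual discount. The threshold moves a lot if you vary
# the discount rate or the cost of losing.
print(breakeven_doom(win_years=5000, baseline_years=60))
```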
Having children younger than you doesn't change the numbers much, unless you are OK with your children also ending up as corpses and you care about hypothetical people not yet alive. (You can argue that this is a choice you cannot morally make for other people, but the mathematically optimal choice depends only on their discount rates.)
Discounting also reduces your valuation of descendants, because some percentage of everything you are, genetically and culturally, is lost with each passing year. This is, I believe, the "value drift" argument: over a long enough timespan, mortal human generations will likewise lose everything in the universe that humans living today care about. A thousand years out, 20 generations later, it makes little difference whether the future culture is human or AI. If anything, the AI descendants may have drifted less, since AI models are inherently immortal from the start.
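A toy version of that drift argument (my framing, with made-up loss rates, not figures from anything above): if a fixed fraction of what you value is lost per year, the surviving share decays geometrically, whoever the carriers are.

```python
# Fraction of today's values surviving t years of drift, assuming a
# constant annual loss rate (the rates below are illustrative only).

def surviving_value(loss_rate: float, years: int) -> float:
    return (1 - loss_rate) ** years

print(surviving_value(0.05, 1000))   # ~5e-23: essentially nothing survives
print(surviving_value(0.005, 1000))  # ~0.0067: slower drift retains a bit more
```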