I recently learned that when they talk about a life expectancy of, e.g., 80 years, it’s assuming that the future looks similar to the past. But with exponential progress in technology, that doesn’t seem like a good assumption. For example, according to Bostrom’s survey, the median “pessimistic” estimate (the year by which AGI arrives with 90% likelihood) is 2075.
So then, taking into account technological progress, what do we expect life expectancy to be?
It is infinite. Imagine that FAI appears in our time, that it has human immortality as its final goal, and that it is also able to solve the problem of the end of the universe. All of this combined has, say, a 1 per cent probability. Thus we have a 1 per cent chance of infinite life expectancy, whose median is also infinity.
The life expectancy is infinite, but the median is finite unless the probability of immortality is at least 50%.
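To spell that out with the 1 per cent figure from above and an illustrative 80-year figure for the finite case:

$$E[\text{lifespan}] = 0.01 \cdot \infty + 0.99 \cdot 80\ \text{years} = \infty, \qquad \operatorname{median} \approx 80\ \text{years},$$

since 99 per cent of the probability mass sits at finite values, the 50th percentile falls well inside the finite part.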
Yes. But if we also add quantum immortality here, we could ignore all finite branches. However, there is logical uncertainty about the validity of QI, and it has a less than 50 per cent chance of being true.
Well, we can avoid the debate about quantum immortality if we specify that we’re talking about lifespan from the perspective of a 3rd party observer. After all, the OP is talking about the effect of technological progress, whereas if you accept quantum immortality then you would have accepted it even without progress.
The universe is not actually infinite.
And even if it were, there could be other things out there that could stop an FAI.
And even if there are not, people could still choose to die with some probability per unit of time, which leads to a finite average.
And even if they don’t, it could turn out that we want to value life-years lived with a subjective weight that falls over time, causing the average to be finite. (A rough sketch of both of these mechanisms below.)
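Illustrative assumptions only: a constant per-century probability p of choosing to die, and a subjective weight on each year that decays geometrically by a factor γ < 1:

$$E[L] = \sum_{t=1}^{\infty} t\,p(1-p)^{t-1} = \frac{1}{p}\ \text{centuries}, \qquad \sum_{t=0}^{\infty} \gamma^{t} = \frac{1}{1-\gamma},$$

both finite even though the sums run over infinitely many years.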
One could get infinite life expectancy in a finite universe if one’s timeline is circular. But the capability to survive the end of the universe also assumes the capability to overcome its finiteness. And if the mathematical universe hypothesis is true, the universe is infinite. Even if the universe is finite, there are ways to leave it, like acausal trade with other universes.
Will these other things stop all FAIs, or will some share of them survive?
Some FAIs may solve suicide by dividing a person’s mind into two parts: a part which wants to live and a part which doesn’t. The second one will be terminated.
If FAI appears, it could improve quality of life so that life becomes more and more interesting every year. Or it could find other ways to solve the finiteness of expected utility. Anyway, utility was not part of the OP’s question.
Good point. One point against this is that, upon reflection, I expect human immortality is unlikely to be optimal in most of the ways we imagine it. I expect that on most likely consequentialist framings, the resources that could be spent on continuing my own “individual self” would be more effectively used elsewhere. You might need a very liberal notion of “self” to consider what gets kept as “you.”
That said, this wouldn’t be a bad thing; it would be more of a series of obvious decisions and improvements.
That assumes some kind of impartial utility function. I believe that, to the extent people consciously endorse such preferences, it is self-deception. We are selfish-ish creatures, and if we control the AI in a meaningful sense, we will probably choose to live forever (or at least very long) rather than use those resources in some “better” way.
Thanks for your take on this. I think our intuitions here differ a fair bit.
I find it difficult to reason about what human brains will do once they are uploaded or whatever and dramatically altered. Many of the things we’re used to now may change dramatically. It may be fair to consider that many kinds of “uploaded and modified humans” will become as different from modern humans as we are from simple algorithms or insects.
It could also be that some people will choose to “live forever”, but many others will choose to be replaced.
Well, anything can happen if we get arbitrarily altered, but as long as the alterations are in themselves an expression of our preferences, I stick with my prediction.
I’ve heard it said (e.g., by Sinclair) that the intrinsic lifespan of the human body is more like 120 years. I think the 80-year number referenced is more of a statistical measure of how long a human lives given all the things that do kill us in the world: internal genetic flaws, biological pathogens that attack us, accidents, lifestyle impacts, and so on.
Accepting AGI by 2075, the first question to ask seems to be: will that help address “the stuff that kills us now”, or will it address both that and the way our biology itself seems to work?
The expectation is probably around 1 billion years:
10% × 10 billion years (living roughly as long as the universe has existed already) +
90% × dying within 1,000 years (likely within 70 for most people here!)
Total: roughly 1 billion years.
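Written out as an expected value (rough, illustrative numbers):

$$E[\text{lifespan}] \approx 0.10 \times 10^{10}\ \text{years} + 0.90 \times 10^{3}\ \text{years} \approx 10^{9}\ \text{years},$$

where the second term is negligible next to the first.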
I am ignoring the infinite possibilities since any finite system will start repeating configurations eventually, so I don’t think that infinite life for a human even makes sense (you’d just be rerunning the life you already have and I don’t think that counts).
Infinite rerunning makes sense, as there is no moment of dying.
What do you mean by “90% × die within 1,000 years”? The share of people that die within that amount of time, or the age at which people die around that time?