Does immortality imply eternal existence in linear time?
The question is important, as it’s often used as an argument against the idea of immortality, on the level of desirability as well as feasibility. It may result in less interest in radical life extension, since “the result will be the same”: we will die. Religion, on the other hand, is not afraid to “sell” immortality, as it has God, who will resolve every contradiction in the implementation of immortality. As a result, religion wins on the market of ideas.
Immortality (by definition) is about not dying. The fact of eternal linear existence seems to follow from it by a very simple and obvious theorem:
“If, whenever I exist at time moment N, I also exist at moment N+1, then I will exist for any N.”
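The “theorem” is just induction over moments. Writing Alive(N) for “I exist at time moment N”, it can be stated as:

```latex
% Survival-by-induction: the base case plus the inductive step
% give existence at every moment N.
\mathrm{Alive}(0) \;\wedge\; \forall N\,\big(\mathrm{Alive}(N) \rightarrow \mathrm{Alive}(N+1)\big) \;\Longrightarrow\; \forall N\,\mathrm{Alive}(N)
```

Note that the whole statement presupposes that moments are linearly ordered by N, which is exactly the assumption questioned below.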
If we prove that immortality is impossible, then any life would look like: now + an unknown, very long time + death. So death is inevitable, and the only difference is the unknown time until it happens.
It is an unpleasant perspective, by the way.
So we have either “bad infinity” or inevitable death. Both look unappealing. Both also look logically contradictory. “Infinite linear existence” requires infinite memory on the part of the observer, for example. “Death of the observer” implies the idea of an end to the stream of experiences, which can’t be proved empirically and, from a logical point of view, is an unproven hypothesis.
But we can change our point of view if we abandon the idea of linear time.
Physics suggests that closed timelike curves could be possible near black holes: https://en.wikipedia.org/wiki/Closed_timelike_curve (Nietzsche’s idea of “eternal recurrence” is an example of such circular immortality.)
If I am in such a curve, my experiences may recur after, say, one billion years. In this case, I am immortal but have a finite time duration.
It may not be very good, but it is just a starting point for considerations that lead us away from the linear time model.
There may be other configurations in non-linear time. Another obvious one is the merging of different personal timelines.
Another is the circular attractor.
Another is a combination of attractors, merges and circular timelines, which may result in complex geometry.
Another is two- (or many-)dimensional time, with a second, perpendicular time arrow. This results in a time topology. Time could also include singularities, in which one has an infinite number of experiences in finite time.
We could also add the idea of splitting time in the quantum multiverse.
We could also add the idea that a possible path exists between any two observer-moments, and given that infinitely many such paths exist in a splitting multiverse, any observer has a non-zero probability of becoming any other observer, which results in a tangle of time-like curves in the space of all possible minds.
Timeless physics gives us yet another view of “time”, in which we don’t have “infinite time”, not because infinity is impossible, but because there is no such thing as time.
TL;DR: The idea of time is complex enough that we can’t state that immortality implies eternal linear existence. These two ideas may be true or false independently.
Also I have a question to the readers: If you think that superintelligence will be created, do you think it will be immortal, and why?
If we take “immortality” to mean “infinitely many distinct observer moments that are connected to me through moment-to-moment identity”, then yes, by König’s Lemma.
(Every infinite, connected graph in which every vertex has finite degree contains an infinite path.)
(edit: hmmm, does many-worlds give you infinite-branching into distinct observer moments ?)
Can you elaborate on the concept of a connection through “moment-to-moment identity”? Would for example “mind uploading” break such a thing?
Heh, that was really just me trying to come up with a justification for shoe-horning a theory of identity into a graph formalism so that König’s Lemma applied :-)
If I were to try to make a more serious argument it would go something like this.
Defining identity, i.e. whether two entities are ‘the same person’, is hard. People have different intuitions. But most people would say that ‘your mind now’ and ‘your mind a few moments later’ do constitute the same person. So we can define a directed graph with vertices as mind-states (‘mind-states’ would probably have been a better term than ‘observer moments’), with outgoing edges leading to mind-states a few moments later.
That is kind of what I meant by “moment-by-moment” identity. By itself it is a local but not global definition of identity. The transitive closure of that relation gives you a global definition of identity. I haven’t thought about whether it’s a good one.
In the ordinary course of events these graphs aren’t very interesting; they’re just chains coming to a halt upon death. But if you were to clone a mind-state and put it into two different environments, that would give you a vertex with out-degree greater than one.
So mind-uploading would not break such a thing, and in fact without being able to clone a mind-state, the whole graph-based model is not very interesting.
Also, you could have two mind-states that lead to the same successor mind-state, for example where two mind-states differ only in a few memories, which are then forgotten. The possibility of splitting and merging gives you a general (directed) graph-structured identity.
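A minimal sketch of this graph model (the state names and the `reachable` helper are my own illustration, not anything from the comment):

```python
# Toy identity graph: vertices are mind-states, edges are
# "moment-to-moment identity" links to states a few moments later.
edges = {
    "A0": ["A1"],        # ordinary moment-to-moment succession
    "A1": ["A2", "B2"],  # a split: the mind-state is cloned (out-degree 2)
    "A2": ["C3"],
    "B2": ["C3"],        # a merge: two states forget their differing memories (in-degree 2)
    "C3": [],
}

def reachable(graph, start):
    """Transitive closure from one vertex: the set of mind-states
    globally identified with `start` under the local relation."""
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(graph.get(v, []))
    return seen

print(sorted(reachable(edges, "A0")))
```

Here both branches of the split, and the merged state, end up inside the closure of `A0`, which is exactly the “global definition of identity” described above.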
(On a side-note, I think people generally treat splitting and merging of mind-states in a way that is far too symmetrical. Splitting seems far easier: trivial once you can digitize a mind-state. Merging would be like a complex software version-control problem, and you’d need to apply selective amnesia very carefully to achieve it.)
So, if we say “immortality” is having an identity graph with an infinite number of mind-states all connected through the “moment-by-moment identity” relation (stay with me here), and mind states only have a finite number of successor states, then there must be at least one infinite path, and therefore “eternal existence in linear time”.
Rather contrived, I know.
So, the graph model of identity sort of works, but I feel it doesn’t quite get to the real meat of identity. I think the key is in how two vertices of the identity graph are linked and what it means for them to be linked. Because I don’t think the premise that a person is the same person they were a few moments ago is necessarily justified, and in some situations it doesn’t meld with intuition. For example, a person’s brain is a complex machine; imagine it were (using some extremely advanced technology) modified seriously while a person was still conscious. So, it’s being modified all the time as one learns new information, has new experiences, takes new substances, etc, but let’s imagine it was very dramatically modified. So much so that over the course of a few minutes, one person who once had the personality and memories of, say, you, ended up having the rough personality and memories of Barack Obama. Could it really be said that it’s still the same identity?
Why is an uploaded mind necessarily linked by an edge to the original mind? If the uploaded mind is less than perfect (and it probably will be; even if it’s off by one neuron, one bit, one atom) and you can still link that with an edge to the original mind, what’s to say you couldn’t link a very, very dodgy ‘clone’ mind, like for example the mind of a completely different human, via an edge, to the original mind/vertex?
Some other notes: firstly, an exact clone of a mind is the same mind. This pretty much makes sense. So you can get away from issues like ‘if I clone your mind, but then torture the clone, do you feel it?’ Well, if you’ve modified the state of the cloned mind by torturing it, it can no longer be said to be the same mind, and we would both presumably agree that me cloning your mind in a far away world and then torturing the clone does not make you experience anything.
Given that the human mind has a finite number of states, any given linearly immortal being at some point in time becomes indistinguishable from one whose immortality has a finite time duration.
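This is just the pigeonhole principle: any deterministic update on a finite state space must eventually revisit a state and then cycle forever. A toy illustration (the 16-state dynamics below is an arbitrary example, nothing more):

```python
# A deterministic "mind" with finitely many states must eventually
# revisit a state, after which its experience stream repeats forever.
def step(state):
    # arbitrary deterministic update rule on a 16-state space
    return (5 * state + 3) % 16

state, seen, t = 0, {}, 0
while state not in seen:   # at most 16 iterations by pigeonhole
    seen[state] = t
    state = step(state)
    t += 1

cycle_start = seen[state]
cycle_length = t - cycle_start
print(f"enters a cycle of length {cycle_length} after {cycle_start} steps")
```

Whatever the update rule, `cycle_start + cycle_length` can never exceed the number of possible states, which is the point of the objection above.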
I could become non-human, with an arbitrarily large mind. But if we suppose that a finite upper limit on possible mind size exists, say no more than one exabyte, then your objection works.
If we suppose that there are no upper limits on the complexity of possible minds and AIs, then your objection doesn’t work.
But some form of it may still be true, because the larger the mind, the slower it works: a galaxy-sized mind would take 100,000 years to think a single thought. So there may be some limit on the actual size of minds, but more calculation is needed.
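The 100,000-year figure is roughly the light-crossing time of a Milky-Way-sized object. A back-of-envelope check, assuming internal signals travel at the speed of light:

```python
# Minimum time for one signal to cross a galaxy-sized mind.
LIGHT_YEAR_M = 9.461e15      # metres in one light-year
C = 2.998e8                  # speed of light, m/s
SECONDS_PER_YEAR = 3.156e7

diameter_ly = 100_000        # rough diameter of the Milky Way in light-years
crossing_s = diameter_ly * LIGHT_YEAR_M / C
crossing_years = crossing_s / SECONDS_PER_YEAR
print(f"one signal crossing takes about {crossing_years:,.0f} years")
```

By construction one light-year of diameter costs one year of latency, so a single globally coordinated “thought” of such a mind takes on the order of 100,000 years.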
It is a strange (from my point of view) mind that is appeased by the prospect of still dying, but an innumerable number of times, without remembering anything.
Immortalism is losing popularity within H+ due to its popular interpretation as ‘forced immortality’, a trope well explored in fiction. :/
And what is gaining traction? To become a God using FAI?
Asking me to sum up the current state of the H+ movement is tricky because I track the entire thing on H+Pedia.
Atheism is growing very strong, due to the influence of Zoltan Istvan, whereas I believe Aubrey de Grey is toning down some of his immortalist rhetoric in favour of alternative terms like “Regenerative Medicine” and the like.
There are all kinds of trends I could comment on.
Thanks for links!
What does it mean to be immortal? We haven’t solved key questions of personal identity yet. What is it for one personal identity to persist?
It is a good question. The problem of personal identity is one of the most complex, like aging. I am working on a map of identity solutions, and it is very large.
If we decide that identity has definition I, then death is the abrupt disappearance of I, and immortality is the idea that death never happens. It seems that this definition of immortality doesn’t depend on the definition of identity.
But practically, the more fragile the identity, the more probable the death.
The thing is, I’m just not sure if it’s even a reasonable thing to talk about ‘immortality’ because I don’t know what it means for one personal identity (‘soul’) to persist. I couldn’t be sure if a computer simulated my mind it would be ‘me’, for example. Immortality will likely involve serious changes to the physical form our mind takes, and once you start talking about that you get into the realm of thought experiments like the idea that if you put someone under a general anaesthetic, take out one atom from their brain, then wake them up, you have a similar person but not the one who originally went under the anaesthetic. So from the perspective of the original person, undergoing their operation was pointless, because they are dead anyway. The person who wakes from the operation is someone else entirely.
I guess I’m just trying to say that immortality makes heaps of sense if we can somehow solve the question of personal identity, but if we can’t, then ‘immortality’ may be pretty nonsensical to talk about, simply because if we cannot say what it takes for a single ‘soul’ to persist over time, the very concept of ‘immortality’ may be ill-defined.
I like your post about the heat death of the universe, if you ever figure anything out regarding the persistence of a personal identity, I’d like you to message me or something.
Isn’t it purely a matter of definition? You can say that a version of you differing by one atom is you, or that it isn’t; or that a simulation of you either is or isn’t you; but there’s no objective right answer. It is worth noting, though, that if you don’t tell the different-by-one-atom version, or the simulated version, of the fact, they would probably never question being you.
If there’s no objective right answer, then what does it mean to seek immortality? For example, if we found out that a simulation of ‘you’ is not actually ‘you’, would seeking immortality mean we can’t upload our minds to machines and have to somehow figure out a way to keep the pink fleshy stuff that is our current brains around?
If we found out that there’s a new ‘you’ every time you go to sleep and wake up, wouldn’t it make sense to abandon the quest for immortality as we already die every night?
(Note, I don’t actually think this happens. But I think the concept of personal identity is inextricably linked to the question of how separate consciousnesses, each feeling their own qualia, can arise.)
If there’s no objective right answer, you can just decide for yourself. If you want immortality and decide that a simulation of ‘you’ is not actually ‘you’, I guess you (‘you’?) will indeed need to find a way to extend your biological life. If you’re happy with just the simulation existing, then maybe brain uploading or FAI is the way to go. But we’re not going to “find out” the right answer to those questions if there is no right answer.
Are you talking about the hard problem of consciousness? I’m mostly with Daniel Dennett here and think that the hard problem probably doesn’t actually exist (but I wouldn’t say that I’m absolutely certain about this), but if you think that the hard problem needs to be solved, then I guess this identity business also becomes somewhat more problematic.
I think consciousness arises from physical processes (as Dennett says), but that’s not really solving the problem or proving it doesn’t exist.
Anyway, I think you are right in that if you think being mind-uploaded does or does not constitute continuing your personal identity or whatever, it’s hard to say you are wrong. However, what if I don’t actually know whether it does, yet I want to be immortal? Then we have to study the question, to figure out which things we can do keep the real ‘us’ existing and which don’t.
What if the persistence of personal identity is a meaningless pursuit?
Let’s suppose that the contents of a brain are uploaded to a computer, or that a person is anesthetized and a single atom in their brain is replaced. What exactly would it mean to say that personal identity doesn’t persist in such situations?
So, let’s say you die, but a super intelligence reconstructs your brain (using new atoms, but almost exactly to specification), but misplaces a couple of atoms. Is that ‘you’?
If it is, let’s say the computer then realises what it did wrong and reconstructs your brain again (leaving its first prototype intact), this time exactly. Which one is ‘you’?
Let’s say the second one is ‘you’, and the first one isn’t. What happens when the computer reconstructs yet another exact copy of your brain?
If the computer told you it was going to torture the slightly-wrong copy of you (the one with a few atoms missing), would that scare you?
What if it was going to torture the exact copy of you, but only one of the exact copies? There’s a version of you not being tortured, what’s to say that won’t be the real ‘you’?
Maybe; it would probably think so, at least if it wasn’t told otherwise.
Both would probably think so.
All three might think so.
I find that a bit scary.
Wouldn’t there, then, be some copies of me not being tortured and one that is being tortured?
If I copied your brain right now, but left you alive, and tortured the copy, you would not feel any pain (I assume). I could even torture it secretly and you would be none the wiser.
So go back to the scenario—you’re killed, there are some exact copies made of your brain and some inexact copies. It has been shown that it is possible to torture an exact copy of your brain while not torturing ‘you’, so surely you could torture one or all of these reconstructed brains and you would have no reason to fear?
Well.. Let’s say I make a copy of you at time t. I can also make them forget which one is which. Then, at time t + 1, I will tickle the copy a lot. After that, I go back in time to t − 1, tell you of my intentions and ask you if you expect to get tickled. What do you reply?
Does it make any sense to you to say that you expect to experience both being and not being tickled?
The idea of immortality always brings up the logical holes in religious beliefs. For example, if God is immortal and all-powerful, existing in the past, present, and future, he would by definition be outside of time. Why would a being outside of time care when or where we die, or about anything else that could happen to us? To it, we would never die and always be dead.
Plus, the first 100 years in heaven sound good, but the next 2 billion...
There is another important difference between the words “immortality” and “indefinite life extension”: their relation to the hypothetical event of resurrection, such as my reconstruction by a future AI. Immortality includes it, but life extension seems to speak only about continuous existence.
Seems like nothing will ever be immortal. Second Law of Thermodynamics and all.
Immortality comes with all kinds of definitions, literal immortality sounds more of a type-X civilization problem to me :)
I discussed the ways how to survive the end of the universe elsewhere http://lesswrong.com/lw/mfa/a_roadmap_how_to_survive_the_end_of_the_universe/
I think “both look unappealing” because neither one makes a good story.
The right answer is to stop hoping that your life is going to make a good story: life is not a story at all.
I’ve heard it argued that given the assumption of infinitely divisible time, one can theoretically achieve all the purported benefits of immortality in a finite amount of time, using a derivative of Zeno’s paradox.
I think you may be referring to Tipler’s Omega Point. Also, John Smart had similar ideas, as I remember: he said that civilization would evolve into smaller and smaller entities running at higher and higher speeds. As a result, the technological singularity would also be a physical singularity.
For now, we can’t say whether time is infinitely divisible. Planck time may be the limit.
It would need an infinite amount of energy, though.
Does doing something in half the time take half the energy?
Depends on the something: flipping a bit faster and faster surely requires more and more energy (no system is perfectly rigid, the speed the components need to reach doubles every time, etc.)
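This can be made concrete with a toy model (my own numbers, not from the thread): suppose each computational step takes half the time of the previous one but costs twice the energy. The total time converges while the total energy diverges:

```python
# Zeno-style schedule: step k takes 1/2**k seconds and costs 2**k joules.
# Infinitely many steps fit into 2 seconds, but the energy bill is unbounded.
steps = 50
total_time = sum(1 / 2**k for k in range(steps))    # geometric, converges to 2
total_energy = sum(2**k for k in range(steps))      # geometric, diverges

print(f"time after {steps} steps: {total_time:.6f} s (bounded by 2)")
print(f"energy after {steps} steps: {total_energy:.3e} J (unbounded)")
```

So under this assumption the Zeno trick buys infinitely many subjective moments in finite external time only at the price of infinite energy, which is the objection above.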
Probably you could get at least infinite energy density in a collapsing black hole, near its singularity.
A more interesting question for me is that of a silent ‘t’: Does immortality imply immorality?
We may try to quantify it. If an agent is creating a virus that has a 10 percent chance of giving him immortality and a 1 percent chance of resulting in human extinction, is it moral for him to proceed? Clearly not, even from a selfish point of view. If we have 1000 such agents, extinction is practically inevitable.
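With these toy numbers the arithmetic is stark (assuming the 1000 attempts are independent):

```python
# Each of 1000 agents makes one attempt carrying an independent
# 1% chance of human extinction.
p_extinction_per_attempt = 0.01
agents = 1000

p_survival = (1 - p_extinction_per_attempt) ** agents
print(f"chance humanity survives all {agents} attempts: {p_survival:.5f}")
```

Survival probability comes out below 0.01 percent, so at the population level these individually “small” risks add up to near-certain extinction.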
So a clearly aggressive and selfish quest for immortality is immoral and would turn a person into a social cancer cell. But in reality the situation is the opposite.
You need to give immortality to as many people as possible if you want it to be a tested, cheap, and predictable technology. Think about the iPhone: it is cheap, high quality, and reliable because of economies of scale.
So I think that fighting for life extension is the second most important and positive thing after the prevention of x-risks (and it seems to be underestimated by EA).