There are a number of different reasons people give as to why uploading is good.
For example, I do see making copies of myself as a good and positive goal. If there are two of me, all other things being equal, I am twice as well off—regardless of whether or not I have any interaction with myselves. I am a really good thing and there should be more of me.
Some people, on the other hand, either subconsciously assume or actively desire a destructive upload—they have one copy of themselves, the software copy, and that’s all they want. The meat body is either trash to be disposed of or simply not considered.
Closely related, some people conceive of a unitary selfhood as an inherently valuable thing, but also want access to a meat body in some form. In this case, duplication/destruction is a problem to be solved—the meat body might be disposed of, might be disposed of but with DNA kept for potential reanimation, might be kept in cold storage… If we go by published science fiction, this seems to be the most common model. This is an interesting case in which the meat body (and perhaps the brain specifically) is often seen as a good and desirable thing, and in some cases the point of uploading is only that it is a useful way of ensuring immortality (John Varley-style transhumanism).
With so many mental models to choose from, it is not surprising that anyone who does not want to think about a lonely meatbody wasting away on a dying Earth just doesn’t bother to consider the problem. It’s an easy issue to ignore, when most people are still doubtful that uploading will be a possibility in their lifetime.
However, I think in most cases, people who think about uploading see it as “better than dying”, while at the same time acknowledging your concern that really, someone called you is dying. Whether they see this as a personal death (as you do) or statistical death (“half of me’s just died!”) probably has no more ontological fact behind it than whether or not you have a soul… but of course, there are plenty of people who are willing to argue for hours on either of those points :-)
But it’s precisely this reference to “my meatbody” and “my computer body” or whatever that confuses me. When you upload, a new consciousness is created, right? You don’t have two bodies, you just have a super-doppelganger.
He can suffer while I don’t, and vice versa. And he can die, and I’ll go on living. And I can still die just as much as before, while the other goes on living.
I don’t understand what about this situation would make me okay with dying.
So I could understand it as valuable to someone for other reasons, but I don’t understand its presentation as a life extension technology.
My understanding is that LWers do not believe in a permanent consciousness.
A teleporter makes a clone of you with identical brain patterns: did it get a new consciousness, how do you tell your consciousness didn’t go to the clone, where does the consciousness lie, is it real, etc.
It’s not real, therefore the clone is literally you. Either that or we’re dying every second.
I understand what you are saying, and I think that most people would agree with your analysis (at least, once it is explained to them). But I also think that it is not entirely coherent. For example, imagine that we had the technology to replace neurons with nanocircuits. We inject you with nanobots and slowly, over the course of years, each of your brain cells is replaced with an electronic equivalent. This happens so slowly that you do not even notice—consciousness is maintained unbroken. Then, one at a time, the circuits and feedback loops are optimized; this you do notice, as you get a better memory and you can think faster and more clearly; throughout this, however, your consciousness is maintained unbroken. Then your memory is transcribed onto a more efficient storage medium (still connected to your brain, and with no downtime). You can see where this is going. There is no point where it is clear that one you ceases and another begins, but at the end of the process you are a ‘computer body’. Moreover, while I set this up to happen over years, there’s no obvious reason that you couldn’t speed the example up to take seconds.
Wizard has given another example; most of us accept Star Trek-style transporters as a perfectly reasonable concept (albeit maybe impossible in practice), but when you look at them closely they present exactly the sort of moral/ontological dilemma you are worried about. This suggests that we do not fully grok even our own concept of personal identity.
One solution is to conclude that, if after much thought you cannot define a consistent concept of persistence of personal identity over time, perhaps this is because it is not an intellectual concept, but a lizard-brain panic caused by the mention of death.
In my mind this is exactly the same sort of debate people have over free will. The concept makes no real sense as an ontological concept, but it is one so deeply ingrained in our culture that it takes a lot of thought to accept that.
So if uploading was followed by guillotining the “meatbody,” would you sign up?
I have no problem with the brain just being one kind of hardware you can run a consciousness on. I have no problem with transporting the mind from one hardware to another, instantaneously, if you can do it in between the neural impulses.
But it seems like people mean you get scanned, a second, fully “real,” person comes into existence, and this is supposed to extend your life.
Are we to believe that the new consciousness would be fine with being killed, just because you would still be around afterwards? Would their life be extended in you even if they were deleted after being created? Are they going to stick around feeling and experiencing life because you exist?
My confusion is that these seem like obvious points. Why are people even taking this seriously, why is it on the list?
I can fully understand why the rest of us might like to upload the great people of the world, or maybe everybody if we value having them around. But I don’t think this should make them feel indifferent to their deaths, because it’s not extending anyone’s life.
I put this in the open thread because I assumed I was just ignorant of some key part of the process. If this is really it, maybe these points should be their own post and we can kick uploading off the life extension possibility list.
I would not sign up for a destructive upload unless I was about to die. But if I was convinced that I was about to die, then I absolutely would.
I don’t think that you are missing anything, really. If I uploaded the average transhumanist, and then asked the meatbody (with mind intact) what I should do with the meatbody, they’d say either to go away and leave them alone or to upload them a few more times, please. If I asked them if they were happy to have a copy uploaded, they would say yes. If I asked them if they were disappointed that they were the meatbody version of themselves, they’d say yes. If I asked if the meatbody would now like an immortality treatment, they would say yes. If I asked the uploaded copy if they wanted the meatbody to get the immortality treatment, they would say yes.… I think.
I think that uploading is on the list primarily because there is a lot of skepticism that the original human brain can last much more than ~150 years. Whether or not this skepticism is justified is still an open question.
Uploading may also get a spot on the list because if you can accept a destructive upload, then your surviving self does get (at least theoretically) a much much better life than is likely to be possible on meatEarth.
If you accept this solution, however, you might also say that neither uploading nor life extension technology in general is actually necessary, because many other things, such as having children, are just as good objectively, even if your lizard-brain panic caused by the mention of death doesn’t agree.
I like children and want children that are as cool as I am. But no child of mine has a statistically significant chance of being me.
“Just as good objectively” misses the point on two counts:
Lots of things are as good as other things. But just because tiramisu is just as good as chocolate mousse, this does not mean that it is okay to get rid of chocolate mousse. What might make it okay to get rid of chocolate mousse is if you had another dish that tasted exactly like chocolate mousse, to the point that the only way you could tell which is which was by looking at the dish it was in.
This is not a question of objectivity—this is a question of managing your own subjective feelings. I may well find that I am best off keeping my highly subjective view that I am one of the most important people in my world, but also better off rejecting my subjective view that meatbody death is the same as the death of me.
The point is that “so and so is me” is never an objective fact at all. So if the child has no chance of being you, neither does the upload. If you are saying that you can identify with the upload, that is not in any objective sense different from identifying with your child, or identifying with some random future human and calling that a reincarnated version of yourself.
And I don’t object to any of that; I think it may well be true that it is objectively just as good to have a child and then to die, as to continue to live yourself, or to upload yourself and die bodily. As you say, the real issue is managing your feelings, and it is just a question of what works for you. There is no reason to argue that having children shouldn’t be a reasonable strategy for other people, even if it is not for you.
Granted, and particularly true, I’d like to think, for rationalists.
It is reasonable to argue that any social/practical aspect of yourself also exists in others, and that the most rational thing to do is to a) confirm that this is an objectively good thing and b) work to spread it throughout the population. This is a good reason to view works of art, scientific work, and children as valid forms of immortality. This is particularly useful to focus on if you expect to die before immortality breakthroughs happen, but as a general outlook on life it might be more socially (and economically) productive than any other. As some authors have pointed out, immortality of the individual might equal the stagnation of society.
Accepting death of the self might be the best way forward for society, but it is a hard goal to achieve.