So if I’m parsing you correctly, you are assuming that if an upload of me is created, Upload_Dave necessarily differs from me in the following ways: it doesn’t have a soul, and consequently is denied the possibility of heaven; it doesn’t have a sense of smell, taste, hearing, sight, or touch; it doesn’t have my hands, or perhaps hands at all; and it is easier to hack (that is, to modify without its consent) than my brain is.
Yes?
Yeah, I think if I believed all of that, I also wouldn’t be particularly excited by the notion of uploading.
For my own part, though, those strike me as implausible beliefs.
I’m not exactly sure what your reasons for believing all of that are… they seem to come down to a combination of incredulity (roughly speaking, no computer program in your experience has ever had those properties, so it feels ridiculous to assume that a computer program can ever have those properties) and the fact that they contradict your existing religious beliefs. Have I understood you?
I can see where, if I had more faith than I do in the idea that computer programs will always be more or less like they are now, and in the idea that what my rabbis taught me when I was a child was a reliable description of the world as it is, those beliefs about computer programs would seem more plausible.
it doesn’t have a soul, and consequently is denied the possibility of heaven
More like “it doesn’t have a soul, therefore there’s nothing to send to heaven”.
(roughly speaking, no computer program in your experience has ever had those properties, so it feels ridiculous to assume that a computer program can ever have those properties)
I have a great deal of faith in the ability of computer programs to surprise me by using ever-more-sophisticated algorithms for parsing data. I don’t expect them to feel. If I asked a philosopher what it’s like for a bat to be a bat, they’d understand the allusion I’d like to make here, but that’s awfully jargony. Here’s an explanation of the concept I’m trying to convey.
I don’t know whether that’s something you’ve overlooked or whether I’m asking a wrong question.
If it helps, I’ve read Nagel, and would have gotten the bat allusion. (Dan Dennett does a very entertaining riff on “What is it like to bat a bee?” in response.)
But I consider the physics of qualia to be kind of irrelevant to the conversation we’re having.
I mean, I’m willing to concede that in order for a computer program to be a person, it must be able to feel things in italics, and I’m happy to posit that there’s some kind of constraint—label it X for now—such that only X-possessing systems are capable of feeling things in italics.
Now, maybe the physics underlying X is such that only systems made of protoplasm can possess X. This seems an utterly unjustified speculation to me, and no more plausible than speculating that only systems weighing less than a thousand pounds can possess X, or only systems born from wombs can possess X, or any number of similar speculations. But, OK, sure, it’s possible.
So what? If it turns out that a computer has to be made of protoplasm in order to possess X, then it follows that for an upload to be able to feel things in italics, it has to be an upload running on a computer made of protoplasm. OK, that’s fine. It’s just an engineering constraint. It strikes me as a profoundly unlikely one, as I say, but even if it turns out to be true, it doesn’t matter very much.
That’s why I started out by asking you what you thought a computer was. IF people have to be made of protoplasm, AND IF computers can’t be made of protoplasm, THEN people can’t run on computers… but not only do I reject the first premise, I reject the second one as well.
“IF people have to be made of protoplasm, AND IF computers can’t be made of protoplasm, THEN people can’t run on computers… but not only do I reject the first premise, I reject the second one as well.”
Does it matter?
What if we can run some bunch of algorithms on a computer that pass the Turing test but are provably non-sentient?
When it comes down to it, we’re looking for something that can solve generalized problems willingly and won’t deliberately try to kill us.
It’s like the argument against catgirls. Some people would prefer to have human girls/boys, but trust me, sometimes a catgirl/boy would be better.
1) If we are trying to upload (the context here, if you follow the thread up a bit), then we want the emulations to be alive in whatever senses it is important to us that we are presently alive.
2) If we are building a really powerful optimization process, we want it not to be alive in whatever senses make alive things morally relevant, or we have to consider its desires as well.
OK, fair enough, if you’re looking for uploads. Personally I don’t care, as I take the position that the upload concept isn’t really me; it’s a simulated me, in the same way that a “spirit version of me” (i.e., a soul) isn’t really me either.
Please correct my logic if I’m wrong here: in order to take the position that an upload is provably you, the only feasible way to run the test is to have other people verify that it’s you. The upload saying it’s you doesn’t cut it, and neither does the upload merely acting exactly like you. In other words, the test for whether an upload is really you doesn’t require it to actually be you, only to simulate you exactly. Which means that the upload doesn’t need to be sentient.
Please fill in the blanks in my understanding so I can get where you’re coming from (this is a request for information, not sarcasm).
I endorse dthomas’ answer in the grandparent; we were talking about uploads.
I have no idea what to do with the word “provably” here. It’s not clear to me that I’m provably me right now, or that I’ll be provably me when I wake up tomorrow morning. I don’t know how I would go about proving that I was me, as opposed to being someone else who used my body and acted just like me. I’m not sure the question even makes sense.
To say that other people’s judgments on the matter define the issue is clearly insufficient. If you put X in a dark cave with no observers for a year, then if X is me then I’ve experienced a year of isolation and if X isn’t me then I haven’t experienced it and if X isn’t anyone then no one has experienced it. The difference between those scenarios does not depend on external observers; if you put me in a dark cave for a year with no observers, I have spent a year in a dark cave.
Mostly, I think that identity is a conceptual node that we attach to certain kinds of complex systems, because our brains are wired that way, but we can in principle decompose identity into component parts (shared memory, continuity of experience, various sorts of physical similarity, etc.) without anything left over. If a system has all those component parts (it remembers what I remember, it remembers being me, it looks and acts like me, etc.), then our brains will attach that conceptual node to that system, and we’ll agree that that system is me, and that’s all there is to say about that.
And if a system shares some but not all of those component parts, we may not agree whether that system is me, or we may not be sure if that system is me, or we may decide that it’s mostly me.
Personal identity is similar in this sense to national identity. We all agree that a child born to Spaniards and raised in Spain is Spanish, but is the child of a Spaniard and an Italian who was born in Barcelona and raised in Venice Spanish, or Italian, or neither, or both? There’s no way to study the child to answer that question, because the child’s national identity was never an attribute of the child in the first place.
While I do take the position that there is unlikely to be any theoretical personhood-related reason uploads would be impossible, I certainly don’t take the position that verifying an upload is a solved problem, or even that it’s necessarily ever going to be feasible.
That said, consider the following hypothetical process:
You are hooked up to sensors monitoring all of your sensory input.
We scan you thoroughly.
You walk around for a year, interacting with the world normally, and we log data.
We scan you thoroughly.
We run your first scan through our simulation software, feeding it the year’s worth of data, and find everything matches up exactly (to some ridiculous tolerance) with your second scan.
Do you expect that there is a way in which you are sentient, in which your simulation could not be if you plugged it into (say) a robot body or virtual environment that would feed it new sensory data?
That is a very good response and my answer to you is:
I don’t know
AND
To me it doesn’t matter, as I’m not for any kind of destructive-scanning upload, ever, though I may consider slow augmentation as parts wear out.
But I’m not saying you’re wrong. I just don’t know and I don’t think it’s knowable.
That said, would I consent to being non-destructively scanned in order to be able to converse with a fast-running simulation of myself (regardless of whether it’s sentient or not)? Definitely.
That said, would I consent to being non-destructively scanned in order to be able to converse with a fast-running simulation of myself (regardless of whether it’s sentient or not)? Definitely.
What about being non-destructively scanned so you can converse with something that may be a fast running simulation of yourself, or may be something using a fast-running simulation of you to determine what to say to manipulate you?
You make sense. I’m starting to think a computer could potentially be sentient. Isn’t a computer a machine, generally made of circuits, that runs programs somebody put on it in a constructed non-context-dependent language?
Isn’t a computer a machine, generally made of circuits, that runs programs somebody put on it in a constructed non-context-dependent language?
I personally believe that humans are likewise machines, generally made of meat, that run “programs”. I put the word “programs” in scare-quotes because our programs are very different in structure from computer programs, though the basic concept is the same.
What we have in common with computers, though, is that our programs are self-modifying. We can learn, and thus change our own code. Thus, I see no categorical difference between humans and computers, though obviously our current computers are far inferior to humans in many (though not all) areas.
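As a toy illustration of the claim above (my own sketch, not the commenter’s), a program can “learn, and thus change its own code” in the loose sense meant here simply by carrying mutable state that alters its future responses:

```python
# Minimal toy of a "self-modifying" program in the loose sense above:
# its response to the same stimulus changes with experience.

def make_learner():
    seen = {}  # mutable state standing in for modifiable "code"

    def respond(stimulus):
        # each exposure to a stimulus changes the stored response
        seen[stimulus] = seen.get(stimulus, 0) + 1
        return seen[stimulus]

    return respond

agent = make_learner()
print(agent("ping"))  # 1: first exposure
print(agent("ping"))  # 2: behavior altered by prior experience
```

This is of course far simpler than either a brain or a modern learning system, but it shows why “runs programs” and “fixed behavior” are not the same thing.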
That’s a perfectly workable model of a computer for our purposes, though if we were really going to get into this we’d have to further explore what a circuit is.
Personally, I’ve pretty much given up on the word “sentient”… in my experience it connotes far more than it denotes, such that discussions that involve it end up quickly reaching the point where nobody quite knows what they’re talking about, or what talking about it entails. I have the same problem with “qualia” and “soul.” (Then again, I talk comfortably about something being or not being a person, which is just as problematic, so it’s not like I’m consistent about this.)
But that aside, yeah, if any physical thing can be sentient, then I don’t see any principled reason why a computer can’t be. And if I can be implemented in a physical thing at all, then I don’t see any principled reason why I can’t be implemented in a computer.
Also (getting back to an earlier concern you expressed), if I can be implemented in a physical thing, I don’t see any principled reason why I can’t be implemented in two different physical things at the same time.
I agree Dave. Also I’ll go further. For my own personal purposes, I care not a whit if a powerful piece of software passes the Turing test, can do cool stuff, and won’t kill me, but is basically an automaton.
I would go one step further, and claim that if a piece of software passes the general Turing test—i.e., if it acts exactly like a human would act in its place—then it is not an automaton.
And I’d say that taking that step is a point of philosophy.
Consider this: I have a Dodge Durango sitting in my garage.
If I sell that Dodge Durango and buy an identical one (it passes all the same tests in exactly the same way), then is it the same Dodge Durango? I’d say no, but the point is irrelevant.
Why not, and why is it irrelevant? For example, if your car gets stolen, and later returned to you, wouldn’t you want to know whether you actually got your own car back?
I have to admit, your response kind of mystified me, so now I’m intrigued.
No, I’d not particularly care if it was my car that was returned to me, because it gives me utility and it’s just a thing.
I’d care if my wife was kidnapped and some simulacrum was given back in her stead. But I doubt I’d be able to tell, if it was such an accurate copy; if I knew the fake wife was fake, I’d probably be creeped out, but if I didn’t know, I’d just be so glad to have my “wife” back.
In the case of the simulated porn actress, I wouldn’t really care if she was real, because her utility for me would be similar to watching a movie. Once done with the simulation, she would be shut off.
That said, the struggle would be with whether or not she (the catgirl version of the porn actress) was truly sentient. If she was truly sentient, then I’d be evil in the first place, because I’d be coercing her to do evil stuff in my personal simulation. But I think there’s no viable way to determine sentience other than “if it walks like a duck and talks like a duck,” so we’re back to the beginning again, and THUS I say “it’s irrelevant.”
I’d care if my wife was kidnapped and some simulacrum was given back in her stead. But I doubt I’d be able to tell, if it was such an accurate copy; if I knew the fake wife was fake, I’d probably be creeped out, but if I didn’t know, I’d just be so glad to have my “wife” back.
My primary concern in a situation like this is that she’d be kidnapped and presumably extremely not happy about that.
If my partner were vaporized in her sleep and then replaced with a perfect simulacrum, well, that’s just teleporting (with less savings on airfare.) If it were a known fact that sometimes people died and were replaced by cylons, finding out someone had been cyloned recently, or that I had, wouldn’t particularly bother me. (I suppose this sounds bold, but I’m almost entirely certain that after teleporters or perfect destructive uploads or whatever were introduced, interaction with early adopters people had known before their “deaths” would rapidly swing intuitions towards personal identity being preserved. I have no idea how human psychology would react to there being multiple copies of people.)
I expect we’d adapt pretty quickly to the idea that there exists a new possible degree of relationship between people, namely the relationship between two people who used to be the same person.
The closest analogy I can think of is if I lived in a culture where families only had one child each, and was suddenly introduced to brothers. It would be strange to find two people who shared parents, a childhood environment, and so forth—attributes I was accustomed to treating as uniquely associated with a person, but it turned out I was wrong to do so. It would be disconcerting, but I expect I’d get used to it.
I expect we’d adapt pretty quickly to the idea that there exists a new possible degree of relationship between people, namely the relationship between two people who used to be the same person.
If you count a fertilized egg as a person, then two identical twins did use to be the same person. :-)
While I don’t doubt that many people would be OK with this I wouldn’t because of the lack of certainty and provability.
My difficulty with this concept goes further.
Since it’s not verifiable that the copy is you, even though it presents the same outputs to any verifiable test, what is to prevent an AI from getting around the restriction on not destroying humanity?
“Oh but the copies running in a simulation are the same thing as the originals really”, protests the AI after all the humans have been destructively scanned and copied into a simulation...
Three scenarios are possible:
1) The AI and I agree on what constitutes a person. In that case, the AI doesn’t destroy anything I consider a person.
2) The AI considers X a person, and I don’t. In that case, I’m OK with deleting X, but the AI isn’t.
3) I consider X a person, and the AI doesn’t. In that case, the AI is OK with deleting X, but I’m not.
You’re concerned about scenario #3, but not scenario #2. Yes?
But in scenario #2, if the AI had control, a person’s existence would be preserved, which is the goal you seem to want to achieve.
This only makes sense to me if we assume that I am always better at detecting people than the AI is. But why would we assume that? It seems implausible to me.
Ha Ha. You’re right. Thanks for reflecting that back to me.
Yes, if you break apart my argument, I’m saying exactly that, though I hadn’t broken it down to that extent before.
The last part I disagree with, which is that I assume I’m always better at detecting people than the AI is. Clearly I’m not, but in my own personal case I don’t trust the AI if it disagrees with me, because of simple risk management: if it’s wrong, and it kills me and then resurrects a copy, then I have experienced total loss. If it’s right, then I’m still alive.
But I don’t know the answer. And thus I would have to say that it would be necessary to allow only scenario #1 if I were designing the AI, because though I could be wrong, I’d prefer not to take the risk of personal destruction.
That said if someone chose to destructively scan themselves to upload that would be their personal choice.
Well, I certainly agree that all else being equal we ought not kill X if there’s a doubt about whether X is a person or not, and I support building AIs in such a way that they also agreed with that.
But if for whatever reason I’m in a scenario where only one of X and Y can survive, and I believe X is a person and Y is not, and the AI says that Y is a person and X is not, and I’m the one who has to decide which of X and Y to destroy, then I need to decide whether I trust my own judgment more than the AI’s judgment, or less.
And obviously that’s going to depend on the particulars of X, Y, me, and the AI… but it’s certainly possible that I might in that situation update my beliefs and destroy X instead of Y.
I think we’re on the same page from a logical perspective.
My guess is the perspective taken is that of physical science vs compsci.
My guess is a compsci perspective would tend to view the two individuals as being two instances of the class of individual X. The two class instances are logically equivalent except for position.
The physical science perspective is that there are two bunches of matter near each other with the only thing differing being the position. Basically the same scenario as two electrons with the same spin state, momentum, energy etc but different positions. There’s no way to distinguish the two of them from physical properties but there are two of them not one.
Regardless, if you believe they are the same person then you go first through the teleportation device… ;->
In Identity Isn’t In Specific Atoms, Eliezer argued that even from what you called the “physical science perspective,” the two electrons are ontologically the same entity. What do you make of his argument?
What do I make of his argument? Well, I’m not a PhD in physics, though I do have a bachelor’s in physics/math, so my position would be the following:
Quantum physics doesn’t scale up to the macro level. While swapping the two helium atoms in two billiard balls results in you not being able to tell which helium atom was which, the two billiard balls certainly can be distinguished from each other. Even “teleporting” one from one place to another will not result in an identical copy, since the quantum states will all have changed just by dint of having been read by the scanning device. Each time you measure, the quantum state changes; so the reason you cannot distinguish two identical copies from each other is not that they are identical, it’s that you cannot even distinguish the original from itself, because the states change each time you measure them.
A macro-scale object composed of multiple atoms A, B, and C could not be distinguished from another macro-scale object composed of atoms of types A, B, and C in exactly the same configuration.
That said, we’re talking about a single object here. As soon as you compare more than one object, it’s not the same: there are the positions, momenta, et cetera of the macro-scale objects to distinguish them, even though they are the same type of object.
I strongly believe that the disagreement around this topic comes from looking at things as classes from a comp sci perspective.
From a physics perspective it makes sense to say two objects of the same type are different even though the properties are the same except for minor differences such as position and momentum.
From a compsci perspective, talking about the position and momentum of instances of classes doesn’t make any sense. The two instances of the classes ARE the same because they are logically the same.
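The distinction being drawn here can be sketched in code (my own toy example, not from the thread): two instances with identical contents compare equal by value, yet remain two distinct objects.

```python
# Toy illustration of value equality vs. object identity: the
# "compsci perspective" says the two are logically the same, while
# the "physics perspective" insists there are still two of them.

from dataclasses import dataclass

@dataclass
class Individual:
    memories: str

original = Individual(memories="same life history")
copy = Individual(memories="same life history")

print(original == copy)  # True: logically equivalent ("the same person")
print(original is copy)  # False: two separate instances ("two bodies")
```

Python’s `==` compares contents while `is` compares identity in memory, which maps fairly directly onto the two perspectives described above.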
Anyway, I’ve segued here:
Take the two putative electrons in a previous post above: there is no way to distinguish between the two of them except by position, but they ARE two separate electrons; they’re not a single electron. If one of them is part of, e.g., my brain, and is then swapped out for the other, there’s no longer any way to tell which is which. It’s impossible. And my guess is this is what’s causing the confusion. From a point of view of usefulness, neither of the two objects is different from the other. But they are separate from each other, and destroying one doesn’t mean that there are still two of them; there is now only one, and one has been destroyed.
Dave seems to take the position that that is fine: the position and number of copies are irrelevant to him, because it’s the information content that’s important.
For me, sure if my information content lived on that would be better than nothing but it wouldn’t be me.
I wouldn’t take a destructive upload if I didn’t know that I would survive it (in the senses I care about), in roughly the same sense that I wouldn’t cross the street if I didn’t know I wasn’t going to be killed by a passing car. In both cases, I require reasonable assurance. In neither case does it have to be absolute.
Exactly. Reasonable assurance is good enough, absolute isn’t necessary.
I’m not willing to be destructively scanned even if a copy of me thinks it’s me, looks like me, and acts like me.
That said, I’m willing to accept the other stance that others take: they are reasonably convinced that destructive scanning just means they will appear somewhere else a fraction of a second later (or however long it takes). Just don’t ask me to do it. And expect a bullet if you try to force me!
Well, sure. But if we create an economy around you where people who insist on carrying a sack of atoms around with them wherever they go are increasingly a minority… for example, if we stop maintaining roads for you to drive a car on, stop flying airplanes to carry your atoms from place to place, etc. … what then?
This is a different point entirely. Sure it’s more efficient to just work with instances of similar objects and I’ve already said elsewhere I’m OK with that if it’s objects.
And if everyone else is OK with being destructively scanned then I guess I’ll have to eke out an existence as a savage. The economy can have my atoms after I’m dead.
Sorry I wasn’t clear—the sack of atoms I had in mind was the one comprising your body, not other objects.
Also, my point is that it’s not just a case of live and let live. Presumably, if the rest of us giving up the habit of carrying our bodies wherever we go means you are reduced to eking out your existence as a savage, then you will be prepared to devote quite a lot of resources to preventing us from giving up that habit… yes?
I will not consent to being involuntarily destructively scanned, and yes, I will devote all of my resources to preventing myself from being involuntarily destructively scanned.
That said, if you or anyone else wants to do it to themselves voluntarily it’s none of my business.
If what you’re really asking, however, is whether I will attempt to intervene if I notice a group of individuals or an organization forcing destructive scanning on individuals, I suspect that I might, but we’re not there yet.
I understand that you won’t consent to being destructively scanned, and that you might intervene to prevent others from being destructively scanned without their consent. That isn’t what I asked.
I encourage you to re-read my question. If, after doing so, you still think your reply answers it, then I think we do best to leave it at that.
I agree completely that there are two bunches of matter in this scenario. There are also (from what you’re labeling the compsci perspective) two data structures. This is true.
My question is, why should I care? What value does the one on the left have, that the one on the right doesn’t have, such that having them both is more valuable than having just one of them? Why is destroying one of them a bad thing? What you seem to be saying is that they are valuable because they are different people… but what makes that a source of value?
For example: to my way of thinking, what’s valuable about a person is the data associated with them, and the patterns of interaction between that data and its surroundings. Therefore, I conclude that if I have that data and those interactions then I have preserved what’s valuable about the person. There are other things associated with them—for example, a particular set of atoms—but from my perspective that’s pretty valueless. If I lose the atoms while preserving the data, I don’t care. I can always find more atoms; I can always construct a new body. But if I lose the data, that’s the ball game—I can’t reconstruct it.
In the same sense, what I care about in a book is the data, not the individual pieces of paper. If I shred the paper while digitizing the book, I don’t care… I’ve kept what’s valuable. If I keep the paper while allowing the patterns of ink on the pages to be randomized, I do care… I’ve lost what’s valuable.
So when I look at a system to determine how many people are present in that system, what I’m counting is unique patterns of data, not pounds of biomass, or digestive systems, or bodies. All of those things are certainly present, but they aren’t what’s valuable to me. And if the system comprises two bodies, or five, or fifty, or a million, and they all embody precisely the same data, then I can preserve what’s valuable about them with one copy of that data… I don’t need to lug a million bundles of atoms around.
So, as I say, that’s me… that’s what I value, and consequently what I think is important to preserve. You think it’s important to preserve the individual bundles, so I assume you value something different.
I understand that you value the information content and I’m OK with your position.
Let’s do another thought experiment, then: say we’re some unknown X number of years in the future, and some foreign entity/government/whatever decided it wanted the territory of the United States (could be any country, just using the USA as an example) but didn’t want the people. It did, however, value the ideas, opinions, memories, etc. of the American people. If said entity then destructively scanned the landmass but painstakingly copied all of the ideas, opinions, memories, etc. into some kind of data store, which it could access at its leisure later, would that be the same thing as the original living people?
I’d argue that from a comp sci perspective what you have just done is build a static class which describes the people, their ideas, memories, etc., but this is not the original people; it’s just a model of them.
Now don’t get me wrong, a model like that would be very valuable, it just wouldn’t be the original.
And yes, of course some people value originals; otherwise you wouldn’t have to pay millions of dollars for postage stamps printed in the 1800s, even though I’d guess that scanning such a stamp and printing out a copy of it should, to all intents and purposes, be the same.
In the thought experiment you describe, they’ve preserved the data and not the patterns of interaction (that is, they’ve replaced a dynamic system with a static snapshot of that system), and something of value is therefore missing, although they have preserved the ability to restore the missing component at their will.
If they execute the model and allow the resulting patterns of interaction to evolve in an artificial environment they control, then yes, that would be just as valuable to me as taking the original living people and putting them into an artificial environment they control.
I understand that there’s something else in the original that you value, which I don’t… or at least, which I haven’t thought about. I’m trying to understand what it is. Is it the atoms? Is it the uninterrupted continuous existence (e.g., if you were displaced forward in time by two seconds, such that for a two-second period you didn’t exist, would that be better or worse or the same as destroying you and creating an identical copy two seconds later?) Is it something else?
Similarly, if you valued a postage stamp printed in the 1800s more than the result of destructively scanning such a stamp and creating an atom-by-atom replica of it, I would want to understand what about the original stamp you valued, such that the value was lost in that process.
Thus far, the only answer I can infer from your responses is that you value being the original… or perhaps being the original, if that’s different… and the value of that doesn’t derive from anything, it’s just a primitive. Is that it?
If so, a thought experiment for you in return: if I convince you that last night I scanned xxd and created an identical duplicate, and that you are that duplicate, do you consequently become convinced that your existence is less valuable than you’d previously thought?
I guess from your perspective you could say that the value of being the original doesn’t derive from anything and it’s just a primitive, because the macro information is the same except for position (though the quantum states are all different even at the point of copy). But yes, I value the original more than the copy, because I consider the original to be me and the others to be just copies, even if they would legally, and in fact, be sentient beings in their own right.
Yes, if I woke up tomorrow and you could convince me I was just a copy then this is something I have already modeled/daydreamed about and my answer would be: I’d be disappointed that I wasn’t the original but glad that I had existence.
Agreed. It’s the only way we have of verifying that it’s a duck.
But is the destructively scanned duck the original duck, when it appears to be the same to all intents and purposes, even though you can see the mulch that used to be the body of the original lying there beside the new copy?
I’m not sure that duck identity works like personal identity. If I destroy a rock but make an exact copy of it ten feet to the east, whether or not the two rocks share identity just depends on how you want to define identity—the rock doesn’t care, and I’m not convinced a duck would care either. Personal identity, however, is a whole other thing—there’s this bunch of stuff we care about to do with having the right memories and the correct personality and utility function etc., and if these things aren’t right it’s not the same person. If you make a perfect copy of a person and destroy the original, then it’s the same person. You’ve just teleported them—even if you can see the left over dust from the destruction. Being made of the “same” atoms, after all, has nothing to do with identity—atoms don’t have individual identities.
(shrug) After the process you describe, there exist two people in identical bodies with identical memories. What conceivable difference does it make which of those people we label “me”? What conceivable difference does it make whether we label both of those people “me”?
If there is some X that differs between those people, such that the label “me” applies to one value of X but not the other value, then talking about which one is “me” makes sense. We might not be able to detect the difference, but there is a difference; if we improved the quality of our X-detectors we would be able to detect it.
But if there is no such X, then for as long as we continue talking about which of those people is “me,” we are not talking about anything in the world. Under those circumstances it’s best to set aside the question of which is “me.”
“(shrug) After the process you describe, there exist two people in identical bodies with identical memories. What conceivable difference does it make which of those people we label “me”? What conceivable difference does it make whether we label both of those people “me”?”
Because we already have a legal precedent. Twins.
Though their memories are very limited, they are legally different people.
My position is that this is rightly so.
Identical twins, even at birth, are different people: they’re genetically identical and shared a very close prenatal environment, but the actual fork happened sometime during the zygote stage of development, when neither twin had a nervous system let alone a mind-state. But I’m not sure why you’re bringing this up in the first place: legalities don’t help us settle philosophical questions. At best they point to a formalization of the folk solution.
As best I can tell, you’re trying to suggest that individual personhood is bound to a particular physical instance of a human being (albeit without actually saying so). Fair enough, but I’m not sure I know of any evidence for that proposition other than vague and usually implicitly dualist intuitions. I’m not a specialist in this area, though. What’s your reasoning?
Risk avoidance. I’m uncomfortable with taking the position that, if we create a second copy and destroy the original, the copy is the original, simply because if it isn’t, then the original is now dead.
Yes, but how do you conclude that a risk exists? Two philosophical positions don’t mean fifty-fifty chances that one is correct; intuition is literally the only evidence for one of the alternatives here to the best of my knowledge, and we already know that human intuitions can go badly off the rails when confronted with problems related to anthropomorphism.
Granted, we can’t yet trace human thoughts and motivations down to the neuron level, but we certainly will be able to by the time we’re able to destructively scan people into simulations; if there’s any secret sauce involved, we’ll by then know it’s there, if not exactly what it is. If dualism turns out to win by then, I’ll gladly admit I was wrong; but if no evidence has shown up by that time, it sounds an awful lot like all there is to fall back on is the failure mode in “But There’s Still A Chance, Right?”.
I read that earlier, and it doesn’t answer the question. If you believe that the second copy in your scenario is different from the first copy in some deep existential sense at the time of division (equivalently, that personhood corresponds to something other than unique brain state), you’ve already assumed a conclusion to all questions along these lines—and in fact gone past all questions of risk of death and into certainty.
But you haven’t provided any reasoning for that belief: you’ve just outlined the consequences of it from several different angles.
Yes, we have two people after this process has completed… I said that in the first place. What follows from that?
EDIT: Reading your other comments, I think I now understand what you’re getting at.
No, if we’re talking about only the instant of duplication and not any other instant, then I would say that in that instant we have one person in two locations.
But as soon as the person at those locations starts to accumulate independent experiences, then we have two people.
Similarly, if I create a static backup of a snapshot of myself, and create a dozen duplicates of that backup, I haven’t created a dozen new people, and if I delete all of those duplicates I haven’t destroyed any people.
I agree that the clone is not me until you write my brain-states onto his brain (poor clone). At that point it is me—it has my brain states. Both the clone and the original are identical to the one who existed before my brain-states were copied—but they’re not identical to each other, since they would start to have different experiences immediately. “Identical” here meaning “the same person as”—not exact isomorphic copies. It seems obvious to me that personal identity cannot be a matter of isomorphism, since I’m not an exact copy of myself from five seconds ago anyway. So the answer to the question is that killing the original quickly doesn’t make a difference to the identity of the clone, but if you allow the original to live a while, it becomes a unique person, and killing him is immoral.
Tell me if I’m not being clear.
Regardless of what you believe you’re avoiding the interesting question: if you overwrite your clone’s memories and personality with your own, is that clone the same person as you? If not, what is still different?
I don’t think anyone doubts that a clone of me without my memories is a different person.
No, I’d not particularly care if it was my car that was returned to me, because it gives me utility and it’s just a thing.
Right, but presumably, you would be unhappy if your Ferrari got stolen and you got a Yaris back. In fact, you might be unhappy even if your Yaris got stolen and you got a Ferrari back—wouldn’t you be?
I’d care if my wife was kidnapped and some simulacrum was given back in her stead, but I doubt I would be able to tell, if it was such an accurate copy. If I knew the fake wife was fake I’d probably be creeped out, but if I didn’t know, I’d just be so glad to have my “wife” back.
If the copy was so perfect that you couldn’t tell that it wasn’t your wife, no matter what tests you ran, then would you believe anyone who told you that this being was in fact a copy, and not your wife at all?
I think there’s no viable way to determine sentience other than “if it walks like a duck and talks like a duck”.
I agree (I think), but then I am tempted to conclude that creating fully sentient beings merely for my own amusement is, at best, ethically questionable.
Would I believe? I think the answer would depend on whether I could find the original or not.
I would, however, find it disturbing to be told that the copy was a copy.
And yes, if the beings are fully sentient then yes I agree it’s ethically questionable.
But since we cannot tell then it comes down to the conscience of the individual so I guess I’m evil then.
Would I believe? I think the answer would depend on whether I could find the original or not.
Finding the original, and determining that it is, in fact, the original, would constitute a test you could run to determine whether your current wife is a replica or not. Thus, under our scenario, finding the original would be impossible.
I would, however, find it disturbing to be told that the copy was a copy.
Disturbing how? Wouldn’t you automatically dismiss the person who tells you this as a crazy person? If not, why not?
But since we cannot tell then it comes down to the conscience of the individual so I guess I’m evil then.
Er… ok, that’s good to know. edges away slowly
Personally, if I encountered some beings who appeared to be sentient, I’d find it very difficult to force them to do my bidding (through brute force, or by overwriting their minds, or by any other means). Sure, it’s possible that they’re not really sentient, but why risk it, when the probability of this being the case is so low?
You’re right. It is impossible to determine whether the current copy is the original or not.
“Disturbing how?”
Yes, I would dismiss the person as being a fruitbar, of course. But if the technology existed to destructively scan an individual and copy them into a simulation, or even reconstitute them from different atoms after being destructively scanned, I’d be really uncomfortable with it. I personally would strenuously object to ever teleporting myself, or copying myself by this method into a simulation.
“edges away slowly”
lol. Not any more evil than Phil, who (I believe) explicitly stated he would kill others who would seek to prevent the building of an AI based on his utility function. I would fight to prevent the construction of an AI based on anything but the average utility function of humanity, even if it excluded my own maximized utility function, because I’m honest enough to say that maximizing my own personal utility function is not in the best interests of humanity.
Even then, I believe that producing an AI whose utility function is maximizing the best interests of humanity is incredibly difficult, and thus have concluded that creating an AI whose definition is just NOT(Unfriendly) and attempting to trade with it is probably far easier. Though I have not read Eliezer’s CEV paper, so I require further input.
“difficult to force them to do my bidding”.
I don’t know if you enjoy video games or not. Right now there’s a first-person shooter called Modern Warfare 3. It’s pretty damn realistic, though the non-player characters (NPCs), which you shoot and kill, are automatons, and we know for sure that they’re automatons. Now fast forward 20 years, and we have NPCs which are so realistic that to all intents and purposes they pass the Turing Test. Is killing these NPCs in Modern Warfare 25 murder?
But if the technology existed to destructively scan an individual and copy them into a simulation or even reconstitute them from different atoms after being destructively scanned I’d be really uncomfortable with it.
What if the reconstitution process was so flawless that there was no possible test your wife could run to determine whether or not you’d been teleported in this manner? Would you still be uncomfortable with the process? If so, why, and how does it differ from the reversed situation that we discussed previously?
Not any more evil than I believe it was Phil who explicitly stated he would kill others who would seek to prevent the building of an AI based on his utility function.
Whoever that Phil guy is, I’m going to walk away briskly from him, as well. Walking backwards. So as not to break the line of sight.
Right now there’s a first-person shooter called Modern Warfare 3. It’s pretty damn realistic, though the non-player characters (NPCs), which you shoot and kill, are automatons, and we know for sure that they’re automatons.
I haven’t played that particular shooter, but I am reasonably certain that these NPCs wouldn’t come anywhere close to passing the Turing Test. Not even the dog version of the Turing Test.
Now fast forward 20 years, and we have NPCs which are so realistic that to all intents and purposes they pass the Turing Test. Is killing these NPCs in Modern Warfare 25 murder?
I’m talking exactly about a process that is so flawless you can’t tell the difference.
Where my concern comes from is this: if you don’t destroy the original, you now have two of them. One is the original (although you can’t tell the difference between the copy and the original), and the other is the copy.
Now here is where I’m uncomfortable: if we then kill the original, by letting Freddie Krueger or Jason do his evil thing, the copy is still alive and is/was indistinguishable from the original. But the alternative hypothesis, which I oppose, states that the original is still alive, and yet I can see the dead body there.
Simply speeding the process up perhaps by vaporizing the original doesn’t make the outcome any different, the original is still dead.
It gets murkier if the original is destructively scanned and then rebuilt from the same atoms but I’d still be reluctant to do this myself.
That said, I’d be willing to become a hybrid organism slowly, by replacing parts of me, and although it wouldn’t be the original me at the end of the total replacement process, it would still be the hybrid “me”.
Interesting position on the killing of the NPCs. In terms of usefulness, that’s why it doesn’t matter to me whether a being is sentient or not in order to meet my definition of AI.
If I make a perfect copy of myself, then at the instant of duplication there exists one person at two locations. A moment later, the entities at those two locations start having non-identical experiences and entering different mental states, and thereby become different people (who aren’t one another, although both of them are me). If prior to duplication I program a device to kill me once and only once, then I die, and I have killed myself, and I continue to live.
I agree that this is a somewhat confusing way of talking, because we’re not used to life and death and identity working that way, but we have a long history of technological innovations changing the way we talk about things.
I completely understand your logic, but I do not buy it, because I do not agree that at the instant of the copying you have one person at two locations. They are two different people: one being the original, and the other being an exact copy.
I’m talking exactly about a process that is so flawless you can’t tell the difference. Where my concern comes from is this: if you don’t destroy the original, you now have two of them. One is the original (although you can’t tell the difference between the copy and the original), and the other is the copy.
Now here is where I’m uncomfortable: if we then kill the original, by letting Freddie Krueger or Jason do his evil thing, the copy is still alive and is/was indistinguishable from the original. But the alternative hypothesis, which I oppose, states that the original is still alive, and yet I can see the dead body there.
Well, think of it this way: Copy A and Copy B are both Person X. Copy A is then executed. Person X is still alive because Copy B is Person X. Copy A is dead. Nothing inconsistent there—and you have a perfectly fine explanation for the presence of a dead body.
It gets murkier if the original is destructively scanned and then rebuilt from the same atoms but I’d still be reluctant to do this myself.
Interesting position on the killing of the NPCs. In terms of usefulness, that’s why it doesn’t matter to me whether a being is sentient or not in order to meet my definition of AI.
I don’t think anyone was arguing that the AI needed to be conscious—intelligence and consciousness are orthogonal.
Original Copy A and new Copy B are indeed instances of person X, but it’s not a class with two instances as in CompSci 101. The class is Original A, and it’s B that is the instance. They are different people.
In order to make them the same person you’d need to do something like this: put some kind of high-bandwidth wifi in their heads which synchronizes memories. Then they’d be part of the same hybrid entity. But at no point are they the same person.
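To put the fork-versus-sync idea in CompSci terms (a toy sketch, nothing more; all the names and the "mind-state as a dict of experiences" framing are made up for illustration): two independent duplicates diverge the moment they accumulate different experiences, whereas the "wifi-synchronized hybrid" is more like two handles to one shared state.

```python
import copy

# Toy "mind-state": just a dict of experiences (illustrative only).
original = {"experiences": ["childhood", "scan day"]}

# Fork: an exact but independent duplicate (the teleporter/upload case).
duplicate = copy.deepcopy(original)
duplicate["experiences"].append("woke up in the lab")
original["experiences"].append("walked home")

print(original == duplicate)    # False: they diverged immediately

# "Wifi sync": two names bound to the SAME state (the hybrid-entity case).
left_head = {"experiences": ["childhood"]}
right_head = left_head          # not a copy; a second handle to one object
right_head["experiences"].append("saw a duck")

print(left_head == right_head)  # True: one state, two handles
print(left_head is right_head)  # True: literally the same object
```

The fork case never re-converges on its own; only the shared-state case behaves like "one person in two places".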
Original Copy A and new Copy B are indeed instances of person X, but it’s not a class with two instances as in CompSci 101. The class is Original A, and it’s B that is the instance. They are different people.
I don’t know why it matters which is the original—the only difference between the original and the copy is location. A moment after the copy happens, their mental states begin to diverge because they have different experiences, and they become different people to each other—but they’re both still Person X.
It matters to you if you’re the original and then you are killed.
You are right that they are both an instance of person X, but my argument is that this is not equivalent to them being the same person in fact, or even in law (whatever that means).
Also, when/if this comes about, I bet the law will side with me and define them as two different people. (And I’m not using this to fallaciously argue from authority, just pointing out that I strongly believe I am correct—though I am willing to concede if there is ultimately some logical way to prove they are the same person.)
The reason is obvious. If they are the same person and one of them kills someone are both of them guilty?
If one fathers a child, is the child the offspring of both of them?
Because of this I cannot agree beyond saying that the two different people are copies of person X. Even you are prepared to concede that they are different people to each other after the mental states begin to diverge, so I can’t close the logical gap: why do you say they are the same person, and not copies of the same person, one being the original? You come partway to saying they are different people. Why not come all the way?
I agree with TheOtherDave. If you imagine that we scan someone’s brain and then run one-thousand simulations of them walking around the same environment, all having exactly the same experiences, it doesn’t matter if we turn one of those simulations off. Nobody’s died. What I’m saying is that the person is the mental states, and what it means for two people to be different people is that they have different mental states.
I’m not really sure about the morality of punishing them both for the crimes of one of them, though. On one hand, the one who didn’t do it isn’t the same person as the one who did—they didn’t actually experience committing the murder or whatever. On the other hand, they’re also someone who would have done it in the same circumstances—so they’re dangerous. I don’t know.
it doesn’t matter if we turn one of those simulations off. Nobody’s died.
You are decreasing the amount of that person that exists.
Suppose the many-worlds interpretation is true. Now I flip a fair quantum coin, and kill you if it comes up heads. Then in 50% of the worlds you still live, so by your reasoning, nobody has died. All that changes is the amplitude of your existence.
Suppose the many-worlds interpretation is true. Now I flip a fair quantum coin, and kill you if it comes up heads. Then in 50% of the worlds you still live, so by your reasoning, nobody has died. All that changes is the amplitude of your existence.
Well, maybe. But there is a whole universe full of people who will never speak to you again and are left to grieve over your body.
You are decreasing the amount of that person that exists.
Yes, there is a measure of that person’s existence (number of perfect copies) which I’m reducing by deleting a perfect copy of that person. What I’m saying is precisely that I don’t care, because that is not a measure of people I value.
Similarly, if I gain 10 pounds, there’s a measure of my existence (mass) which I thereby increase. I don’t care, because that’s not a measure of people I value.
Neither of those statements is quite true, admittedly. For example, I care about gaining 10 pounds because of knock-on effects—health, vanity, comfort, etc. I care about gaining an identical backup because of knock-on effects—reduced risk of my total destruction, for example. Similarly, I care about gaining a million dollars, I care about gaining the ability to fly, there’s all kinds of things that I care about. But I assume that your point here is not that identical copies are valuable in some sense, but that they are valuable in some special sense, and I just don’t see it.
As far as MWI goes, yes… if you posit a version of many-worlds where the various branches are identical, then I don’t care if you delete half of those identical branches. I do care if you delete me from half of them, because that causes my loved ones in those branches to suffer… or half-suffer, if you like. Also, because the fact that those branches have suddenly become non-identical (since I’m in some and not the others) makes me question the premise that they are identical branches.
You are decreasing the amount of that person that exists.
And this “amount” is measured by the number of simulations? What if one simulation is using double the amount of atoms (e.g. by having thicker transistors), does it count twice as much? What if one simulation double checks each result, and another does not, does it count as two?
All that changes is the amplitude of your existence.
The equivalence between copies spread across the many worlds and identical simulations running in the same world is yet to be proven or disproven—and I expect it won’t be proven or disproven until we have some better understanding of the hard problem of consciousness.
Can’t speak for APMason, but I say it because what matters to me is the information.
If the information is different, and the information constitutes people, then it constitutes different people. If the information is the same, then it’s the same person. If a person doesn’t contain any unique information, whether they live or die doesn’t matter nearly as much to me as if they do.
And to my mind, what the law decides to do is an unrelated issue. The law might decide to hold me accountable for the actions of my 6-month-old, but that doesn’t make us the same person. The law might decide not to hold me accountable for what I did ten years ago, but that doesn’t mean I’m a different person than I was. The law might decide to hold me accountable for what I did ten years ago, but that doesn’t mean I’m the same person I was.
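The "what matters is the information" position maps neatly onto value-equality versus reference-identity in programming (a sketch of the analogy, not an argument; the "person as a dict" framing is invented for illustration): two objects can carry exactly the same information while remaining two distinct instances, and destroying one instance doesn't destroy the information.

```python
# Two "people" with identical information content (illustrative only).
person_x = {"memories": ["first kiss", "last birthday"], "name": "X"}
copy_of_x = {"memories": ["first kiss", "last birthday"], "name": "X"}

# Same information: equal by value.
print(person_x == copy_of_x)   # True

# Still two distinct instances: not identical by reference.
print(person_x is copy_of_x)   # False

# Deleting one instance doesn't delete the information;
# the other instance still carries all of it.
del copy_of_x
print(person_x["memories"])    # ['first kiss', 'last birthday']
```

On the information view, only the `==` comparison matters; on the originality view, the `is` comparison matters too, which is roughly where the two sides of this thread part ways.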
“If the information is different, and the information constitutes people, then it constitutes different people.”
True and therein lies the problem. Let’s do two comparisons:
You have two copies. One the original, the other the copy.
Compare them on the macro scale (i.e. non-quantum). They are identical except for position and momentum.
Now let’s compare them on the quantum scale: even at the point where they are identical on the macro scale, they are not identical on the quantum scale. All the quantum states are different. Just the simple act of observing the states (either by scanning them or by rebuilding them) changes them, and thus on the quantum scale we have two different entities, even though they are identical on the macro scale except for position and momentum.
Using your argument that it’s the information content that’s important: they don’t really have any useful differences in information content, especially not on the macro scale, but they have significant differences in all of their non-useful quantum states. They are physically different entities.
Basically what you’re talking about is using a lossy algorithm to copy the individuals. At the level of detail you care about they are the same. At a higher level of detail they are distinct.
I’m thus uncomfortable with killing one of them and then saying the person still exists.
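The "lossy copy" point can be illustrated with numeric precision (a toy analogy I'm adding, not anything from the thread): two values can compare equal at the coarse granularity you care about while still differing at full resolution, just as the two copies match on the macro scale but differ in their quantum states.

```python
# Two measurements of "the same" quantity (illustrative values).
macro_a = 1.0
macro_b = 1.0 + 1e-12   # differs only far below the "macro" scale

# Compare at macro granularity (say, 6 decimal places): identical.
print(round(macro_a, 6) == round(macro_b, 6))  # True

# Compare at full resolution: distinct.
print(macro_a == macro_b)  # False
```

Whether the full-resolution difference matters for identity is exactly the question in dispute; the code only shows that "same" is granularity-relative.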
So, what you value is the information lost during the copy process? That is, we’ve been saying “a perfect copy,” but your concern is that no copy that actually exists could actually be a perfect copy, and the imperfect copies we could actually create aren’t good enough?
Again, just to be clear, what I’m trying to understand is what you value that I don’t. If data at these high levels of granularity is what you value, then I understand your objection. Is it?
“Again, just to be clear, what I’m trying to understand is what you value that I don’t. If data at these high levels of granularity is what you value, then I understand your objection. Is it?”
OK, I’ve mulled your question over, and I think I have the subtlety of what you are asking down, as distinct from the slight variation I answered.
Since I value my own life, I want to be sure that it’s actually me that’s alive if you plan to kill me. Because we’re basically creating an additional copy really quickly and then disposing of the original, I have a hard time believing that we’re doing something equivalent to a single copy walking through a gate.
I don’t believe that the information by itself is enough to answer the question “Is it the original me?” in the affirmative. And given that it’s not even all of the information (though it is all of the information on the macro scale), I know for a fact we’re doing a lossy copy. The quantum states are possibly irrelevant on a macro scale for determining whether (A == B), but since I know from physics that they’re not exactly equivalent once you go down to the quantum level, I just can’t buy into it, though things would be murkier if the quantum states were provably identical.
Here’s what I’ve understood; let me know if I’ve misunderstood anything.
Suppose P is a person who was created and preserved in the ordinary way, with no funky hypothetical copy/delete operations involved. There is consequently something about P that you value… call that “something” X for convenience.
If P’ is a duplicate of P, then P’ does not possess X, or at least cannot be demonstrated to possess X.
This only applies to people; non-person objects either do not possess X in the first place, or if they do, it is possible in principle for a duplication process to create a duplicate that also possesses X.
X is preserved for P from one moment/day/year to the next, even though P’s information content—at a macroscopic level, let alone a quantum one—changes over time. I conclude that X does not depend on P’s information content at all, even on a macroscopic level, and all this discussion of preserving quantum states is a red herring.
By similar reasoning, I conclude that X doesn’t depend on atoms, since the atoms of which P is comprised change over time. The same is true of energy levels.
I don’t have any idea of what that X might actually be, since we’ve eliminated from consideration everything about people I’m aware of.
I’m still interested in more details about X, beyond the definitional attribute of “X is that thing P has that P’ doesn’t”, but I no longer believe I can elicit those details through further discussion.
EDIT: Yes, you did understand, though I can’t personally say I’m willing to come out and definitively call the X a red herring; it sounds like you are willing to do this.
I think it’s an axiomatic difference Dave.
It appears from my side of the table that you’re starting from the axiom that all that’s important is the information, and that originality and/or the physical existence embodying that information means nothing.
And you’re dismissing the quantum states as if they are irrelevant. They may be irrelevant, but since there is some difference between the two copies below the macro scale (and the position is different, and the atoms are different—though unidentifiably so, other than saying that the count is 2x rather than x of atoms), it’s impossible to dismiss the question “Am I dying when I do this?”, because you are making a lossy copy even from your standpoint. The only get-out clause is to say “it’s a close enough copy, because the quantum states and position are irrelevant, and we can’t measure the difference between atoms in two identical copies on the macro scale other than saying we’ve now got 2x the atoms whereas before we had 1x”.
It’s exactly analogous to a bacterium budding. The original cell dies and a close-to-exact copy is budded off.
If the daughter bacterium were an exact copy of the information content of the original, then you’d have to say, from your position, that it’s the same bacterium and the original is not dead, right? Or maybe you’d say that it doesn’t matter that the original died.
My response to that argument (if that is the line of reasoning you take—is it?) would be that it matters volitionally: if the original didn’t want to die and it was forced to bud, then it’s been killed.
I can’t personally say that I’m willing to come out and say definitively that the X is a red herring though it sounds like you are willing to do this.
I did not say the X is a red herring. If you believe I did, I recommend re-reading my comment.
The X is far from being a red herring; rather, the X is precisely what I was trying to elicit details about for a while. (As I said above, I no longer believe I can do so through further discussion.)
But I did say that identity of quantum states is a red herring.
As I said before, I conclude this from the fact that you believe you are the same person you were last year, even though your quantum states aren’t identical. If you believe that X can remain unchanged while Y changes, then you don’t believe that X depends on Y; if you believe that identity can remain unchanged while quantum states change, then you don’t believe that identity depends on quantum states.
To put this another way: if changes in my quantum states are equivalent to my death, then I die constantly and am constantly replaced by new people who aren’t me. This has happened many times in the course of writing this comment. If this is already happening anyway, I don’t see any particular reason to avoid having the new person appear instantaneously in my mom’s house, rather than having it appear in an airplane seat an incremental distance closer to my mom’s house.
Other stuff:
Yes, I would say that if the daughter cell is identical to the parent cell, then it doesn’t matter that the parent cell died at the instant of budding.
I would also say that it doesn’t matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren’t identical to the cells that comprised me then.
I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)
I agree that volition is important for its own sake, but I don’t understand what volition has to do with what we’ve thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn’t kill the original, then it doesn’t, whether the original wants to die or not. It might be valuable to respect people’s volition, but if so, it’s for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)
A question for you: if someone wants to stop existing, and they destructively scan themselves, am I violating their wishes if I construct a perfect duplicate from the scan? I assume your answer is “no,” since the duplicate isn’t them; they stopped existing just as they desired.
“Yes, I would say that if the daughter cell is identical to the parent cell, then it doesn’t matter that the parent cell died at the instant of budding.”
OK good to know. I’ll have other questions but I need to mull it over.
“I would also say that it doesn’t matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren’t identical to the cells that comprised me then.”
I agree with this but I don’t think it supports your line of reasoning. I’ll explain why after my meeting this afternoon.
“I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)”
Interesting. I have a contrary line of argument which I’ll explain this afternoon.
“I agree that volition is important for its own sake, but I don’t understand what volition has to do with what we’ve thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn’t kill the original, then it doesn’t, whether the original wants to die or not. It might be valuable to respect people’s volition, but if so, it’s for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)”
Disagree. Again I’ll explain why later.
“A question for you: if someone wants to stop existing, and they destructively scan themselves, am I violating their wishes if I construct a perfect duplicate from the scan? I assume your answer is “no,” since the duplicate isn’t them; they stopped existing just as they desired.”
Maybe. If you have destructively scanned them, then you have killed them, and they no longer exist; to that extent you have complied perfectly with their wishes, from my point of view. But in order to then make a copy, have you asked their permission? Have they signed a contract giving you the right to make copies? Do they even own that right?
I don’t know.
What I can say is that our differences in opinion here would make a superb science fiction story.
There’s a lot of decent SF on this theme. If you haven’t read John Varley’s Eight Worlds stuff, I recommend it; he has a lot of fun with this. His short stories are better than his novels, IMHO, but harder to find. “Steel Beach” isn’t a bad place to start.
Thanks for the suggestion. Yes, I've already read it (Steel Beach). It was OK, but it didn't really touch on our points of contention as such. In fact, I'd say it steered clear of them, since there wasn't really a concept of uploads. Interestingly, I haven't read anything that really examines closely whether the copied upload is really you. Anyway.
“I would also say that it doesn’t matter that the vast majority of the cells comprising me twenty years ago are dead,
even though the cells currently comprising me aren’t identical to the cells that comprised me then.”
OK, now that I've thought it through, I have to say I think this is a straw-man argument: "you're not the same as you were yesterday" used as a pretext for claiming that you're exactly the same from one moment to the next. It misses the point entirely.
Although you are legally the same person, it's true that you're not exactly the same physical person today as you were yesterday, and it's also true that almost none of the physical matter or cells you had as a child remain in you today.
That this is true in no way negates the main point: human physical existence does have continuity through time. I have some of the same cells I had up to about seven to ten years ago, I have some inert matter in me from the time I was born, AND I have continuous memories to a greater or lesser extent. This is directly analogous to the position I posted before about a slow hybridizing transition to machine form, before I had even clearly thought this out consciously.
Building a copy of yourself and then destroying the original has no continuity. It's directly analogous to asexually budding a new copy of yourself and then imprinting it with your memories, and it is patently not the same concept as normal human existence. Not even close.
That you and some others might dismiss the differences is fine, and if you hypothetically wanted to take the position that killing yourself so that a copy of your mind state could exist indefinitely is worthwhile, I have no problem with that. But it's patently not the same as the process you, I, and everyone else go through day to day. It's a new thing. (Although it's already been tried in nature, in the asexual budding of bacteria.)
I would appreciate, however, that if that choice is being offered to others, what is happening is clearly explained to them: physical body death and the resurrection of a copy, not that they themselves continue living, because they do not. Whether you consider the distinction irrelevant is beside the point. Volition is very important, but I'll get to that later.
“I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with.
I would also say that the person survived, which I think you would not agree with.)”
That's directly analogous to the many-worlds interpretation of quantum physics, with its multiple timelines. You could argue from that perspective that death is irrelevant, because in an infinitude of possibilities, if one of your instances dies, you go on existing.
Fine, but it's not me. I'm mortal and always will be, even if some virtual copy of me might not be.
So you guessed correctly: unless we're using different definitions of "person" (which I think is likely), the person did not survive.
“I agree that volition is important for its own sake, but I don’t understand what volition has to do with what we’ve thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn’t kill the original, then it doesn’t, whether the original wants to die or not.
It might be valuable to respect people’s volition, but if so, it’s for some reason independent of their survival.
(For example, if they want to die, then respecting their volition is opposed to their survival.)”
Volition has everything to do with it.
While it's true that volition is independent of whether they have died or not (agreed), the reason it's important is that some people will likely use your position to justify forced destructive scanning at some point, because it's "less wasteful of resources" or some other pretext.
It's also particularly important in the case of an AI over which humanity has no control. If the AI decides that uploads via destructive scanning are exactly the same thing as the originals, and it needs the space for its own purposes, then nothing stops it from just going ahead, unless volition is considered important.
Here’s a question for you: Do you have a problem with involuntary forced destructive scanning in order to upload individuals into some other substrate (or even a copied clone)?
So here’s a scenario for you given that you think information is the only important thing:
Do you consider a person who has lost much of their memory to be the same person?
What if such a person (who has lost much of their memory) then has a backed-up copy of their memories from six months ago imprinted on top? Did they just die? What if it's someone else's memories: did they just die?
Here's yet another scenario; I wonder if you have thought about this one:
Scan a person destructively (with their permission).
Keep the scan in storage on some static substrate. Then grow a perfectly identical clone of them (using "identical" to mean functionally identical, since, as discussed before, we can't get exactly identical). Copy the contents of the mind state into that clone.
Ask yourself this question: How many deaths have taken place here?
I agree that there is physical continuity from moment to moment in typical human existence, and that there is similar continuity with a slow transition to a nonhuman form. I agree that there is no such continuity with an instantaneous copy-and-destroy operation.
I understand that you consider that difference uniquely important, such that I continue living in the first case, and I don’t continue living in the second case.
I infer that you believe in some uniquely important attribute to my self that is preserved by the first process, and not preserved by the second process.
I agree that if a person is being offered a choice, it is important for that person to understand the choice. I’m perfectly content to describe the choice as between the death of one body and the creation of another, on the one hand, and the continued survival of a single body, on the other. I’m perfectly content not to describe the latter process as the continuation of an existing life.
I endorse individuals getting to make informed choices about their continued life, and their continued existence as people, and the parameters of that existence. I endorse respecting both their stated wishes, and (insofar as possible) their volition, and I acknowledge that these can conflict given imperfect information about the world.
Do you have a problem with involuntary forced destructive scanning in order to upload individuals into some other substrate (or even a copied clone)?
Yes. As I say, I endorse respecting individuals’ stated wishes, and I endorse them getting to make informed choices about their continued existence and the parameters of that existence; involuntary destructive scanning interferes with those things. (So does denying people access to destructive scanning.)
Do you consider a person who has lost much of their memory to be the same person?
It depends on what ‘much of’ means. If my body continues to live, but my memories and patterns of interaction cease to exist, I have ceased to exist and I’ve left a living body behind. Partial destruction of those memories and patterns is trickier, though; at some point I cease to exist, but it’s hard to say where that point is.
What if such a person (who has lost much of their memory) then has a backed up copy of their memories from six months ago imprinted over top?
I am content to say I’m the same person now that I was six months ago, so if I am replaced by a backed-up copy of myself from six months ago, I’m content to say that the same person continues to exist (though I have lost potentially valuable experience). That said, I don’t think there’s any real fact of the matter here; it’s not wrong to say that I’m a different person than I was six months ago and that replacing me with my six-month-old memories involves destroying a person.
What if it’s someone else’s memories: did they just die?
If I am replaced by a different person’s memories and patterns of interaction, I cease to exist.
Scan a person destructively (with their permission). Keep their scan in storage on some static substrate. Then grow a perfectly identical clone of them (using “identical” to mean functionally indentical because we can’t get exactly identical as discussed before). Copy the contents of the mindstates into that clone. How many deaths have taken place here?
Several trillion: each cell in my current body died. I continue to exist. If my clone ever existed, then it has ceased to exist.
Incidentally, I think you’re being a lot more adversarial here than this discussion actually calls for.
Very good response. I can't think of anything to disagree with, and I don't think I have anything more to add to the discussion.
My apologies if you read anything adversarial into my message. My intention was to be pointed in my line of questioning, but you responded admirably without evading any questions.
What if you were in a situation where you had a near 100% chance of a seemingly successful destructive upload on the one hand, and a 5% chance of survival without upload on the other? Which would you pick, and how does your answer generalize as the 5% goes up or down?
Of course I would do it because it would be better than nothing. My memories would survive. But I would still be dead.
Here's a thought experiment to outline the difference (whether or not you think it makes sense from your position of valuing only the information):
Let’s say you could slowly transfer a person into an upload by the following method:
You cut out a part of the brain. That part of the brain is now dead. You replace it with a new part, a silicon part (or some computational substrate) that can interface directly with the remaining neurons.
Am I dead? Partly, yes, but not all of me is, and we're now left with a hybrid being. It's not completely me, but I've not yet been killed by the process, and I get to continue to live and think thoughts (even though part of my thoughts are now happening inside something that isn't me).
Gradually over a process of time (let’s say years rather than days or minutes or seconds) all of the parts of the brain are replaced.
At the end of it I’m still dead, but my memories live on. I did not survive but some part of the hybrid entity I became is alive and I got the chance to be part of that.
Now, I know the position you'd take is that speeding that process up is mathematically equivalent.
From my perspective it isn't. I'm dead instantly, and I don't get the chance to transition my existence in a way that is meaningful to me.
Sidetracking a little:
I suspect you were comparing your unknown quantity X to some kind of "soul". I don't believe in souls. I value being alive, having experiences, and being able to think. To me, dying and then being resurrected on the last day by some superbeing who has rebuilt my atoms using other atoms and copied my information content into some kind of magical "spirit being" is exactly identical to deconstructing me (killing me) and making a copy, even if I took the position that the reconstructed being on "the last day" was me. Which I don't. As soon as I die, that's me gone, regardless of whether some superbeing reconstructs me later using the same or different atoms (if that were even possible).
You're basically asking why I should value myself over an exact copy of myself separated from me in space (and by "exact copy" we mean as close as you can get), and then superimposing another question: "isn't it the information that's important?"
Not exactly.
I'm concerned that I will die, and I'm examining the hypotheses as to why it's not me that dies. The best response I can come up with is "you will die, but it doesn't matter, because there's another identical (or as close as possible) copy still around."
As to what you value that I don't, I don't have an answer. Perhaps a way to elicit one would be to ask you why you value only the information and not the physical object as well.
I’m not asking why you should value yourself over an exact copy, I’m asking why you do. I’m asking you (over and over) what you value. Which is a different question from why you value whatever that is.
I’ve told you what I value, in this context. I don’t know why I value it, particularly… I could tell various narratives, but I’m not sure I endorse any of them.
As to what you value that I don't, I don't have an answer.
Is that a typo? What I’ve been trying to elicit is what xxd values here that TheOtherDave doesn’t, not the other way around. But evidently I’ve failed at that… ah well.
Thanks Dave. This has been a very interesting discussion and although I think we can’t close the gap on our positions I’ve really enjoyed it.
To answer your question "what do I value?": I think I answered it already; I valued not being killed.
The difference in our positions appears to be some version of "but your information is still around," my response of "but it's not me," and your response of "how is it not you?"
I don’t know.
"What is it I value that you don't?" I don't know. Maybe I consider myself to be a higher-resolution copy, or a less lossy copy, or something. I can't put my finger on it, because when it comes down to it, why should mere quantum states make a difference to me when all the macro information is the same apart from position and perhaps momentum? I don't really have an answer for that.
I’m not sure I care.
For example, if I had my evil way and I went FOOM, then part of my optimization process would involve mind control and somewhat deviant roleplay with certain porno actresses. Would I want those actresses to be controlled against their will? Probably not. But at the same time, it would be good enough to have simulations of the actresses accurate enough that I could not tell the difference between the original and the simulation.
You wouldn’t prefer to forego the deviant roleplay for the sake of, y’know, not being evil?
But that's not the point, I suppose. It sounds like you'd take the Experience Machine offer. I don't really know what to say to that, except that it seems like a wacky utility function.
How is the deviant roleplay evil if the participants are not being coerced, or are catgirls? And if it's not evil, then how would I be defined as evil just because I (sometimes, not always) like deviant roleplay?
That's the crux of my point. I don't reckon that optimizing humanity's utility function (or any individual's, for that matter) is simply the opposite of unfriendly AI, and I furthermore reckon that seeking that goal is much, much harder than trying to create an AI that, at a minimum, won't kill us all AND might trade with us if it wants to.
Oh, sorry, I interpreted the comment incorrectly; for some reason I assumed your plan was to replace the actual porn actresses with compliant simulations. I wasn't saying the deviancy itself was evil. Remember that the AI doesn't need to negotiate with you: it's superintelligent and you're not. And creating an AI that just ignores us but still optimises other things is possible, but I don't think it would be easier than creating FAI, and it would be pretty pointless; we want the AI to do something, after all.
Therein lies the crux: you want the AI to do stuff for you.
EDIT: Oh yeah, I get you. So it's by definition evil if I coerce the catgirls by mind control.
I suppose logically I can't have my cake and eat it, since I wouldn't want my own non-sentient simulation controlled by an evil AI either.
So I guess that makes me evil. Who would have thunk it. Well, I guess strike my utility function off the list of friendly AIs. But then again, I've already said elsewhere that I wouldn't trust my own function to be optimal.
I doubt, however, that we’d easily find a candidate function from a single individual for similar reasons.
I think we’ve slightly misunderstood each other. I originally thought you were saying that you wanted to destructively upload porn actresses and then remove sentience so they did as they were told—which is obviously evil. But I now realise you only want to make catgirl copies of porn actresses while leaving the originals intact (?) - the moral character of which depends on things like whether you get the consent of the actresses involved.
But yes! Of course I want the AGI to do something. If it doesn’t do anything, it’s not an AI. It’s not possible to write code that does absolutely nothing. And while building AGI might be a fun albeit stupidly dangerous project to pursue just for the heck of it, the main motivator behind wanting the thing to be created (speaking for myself) is so that it can solve problems, like, say, death and scarcity.
Correct. I (unlike some others) don't hold the position that a destructive upload followed by a simulated being is exactly the same being; therefore, destructively scanning the porn actresses would, in my mind, be killing them.
Non-destructively scanning them and then using the simulated versions for "evil purposes", however, is not killing the originals. Whether using the copies for evil purposes, even against their simulated will, is actually evil is debatable. I know some will take the position that the simulations could theoretically be sentient; if they are sentient, then I am therefore de facto evil.
And I get the point that we want to get the AGI to do something, just that I think it will be incredibly difficult to get it to do something if it’s recursively self improving and it becomes progressively more difficult to do the further away you go from defining friendly as NOT(unfriendly).
Why is it recursively self-improving if it isn’t doing anything? If my end goal was not to do anything, I certainly don’t need to modify myself in order to achieve that better than I could achieve it now.
Well, I would argue that if the computer is running a perfect simulation of a person, then the simulation is sentient—it’s simulating the brain and is therefore simulating consciousness, and for the life of me I can’t imagine any way in which “simulated consciousness” is different from just “consciousness”.
I think it will be incredibly difficult to get it to do something if it’s recursively self improving and it becomes progressively more difficult to do the further away you go from defining friendly as NOT(unfriendly).
I disagree. Creating a not-friendly-but-harmless AGI shouldn't be any easier than creating a full-blown FAI. You've already had to do all the hard work of making it consistent while self-improving, and you've also had to do the hard work of programming the AI to recognise humans and do them no harm, while also acting on other things in the world. Here's Eliezer's paper.
Newsflash: the human body is a machine too! I'm being deliberately antagonistic here; it's so obvious that a human (body and mind being the same thing) is a machine that it's irrelevant even to mention it.
So… hm.
So if I’m parsing you correctly, you are assuming that if an upload of me is created, Upload_Dave necessarily differs from me in the following ways:
it doesn’t have a soul, and consequently is denied the possibility of heaven,
it doesn’t have a sense of smell, taste, hearing, sight, or touch,
it doesn’t have my hands, or perhaps hands at all,
it is easier to hack (that is, to modify without its consent) than my brain is.
Yes?
Yeah, I think if I believed all of that, I also wouldn’t be particularly excited by the notion of uploading.
For my own part, though, those strike me as implausible beliefs.
I’m not exactly sure what your reasons for believing all of that are… they seem to come down to a combination of incredulity (roughly speaking, no computer program in your experience has ever had those properties, so it feels ridiculous to assume that a computer program can ever have those properties) and that they contradict your existing religious beliefs. Have I understood you?
I can see where, if I had more faith than I do in the idea that computer programs will always be more or less like they are now, and in the idea that what my rabbis taught me when I was a child was a reliable description of the world as it is, those beliefs about computer programs would seem more plausible.
Mostly.
More like “it doesn’t have a soul, therefore there’s nothing to send to heaven”.
I have a great deal of faith in the ability of computer programs to surprise me by using ever-more-sophisticated algorithms for parsing data. I don’t expect them to feel. If I asked a philosopher what it’s like for a bat to be a bat, they’d understand the allusion I’d like to make here, but that’s awfully jargony. Here’s an explanation of the concept I’m trying to convey.
I don’t know whether that’s something you’ve overlooked or whether I’m asking a wrong question.
If it helps, I’ve read Nagel, and would have gotten the bat allusion. (Dan Dennett does a very entertaining riff on “What is it like to bat a bee?” in response.)
But I consider the physics of qualia to be kind of irrelevant to the conversation we’re having.
I mean, I’m willing to concede that in order for a computer program to be a person, it must be able to feel things in italics, and I’m happy to posit that there’s some kind of constraint—label it X for now—such that only X-possessing systems are capable of feeling things in italics.
Now, maybe the physics underlying X is such that only systems made of protoplasm can possess X. This seems an utterly unjustified speculation to me, and no more plausible than speculating that only systems weighing less than a thousand pounds can possess X, or only systems born from wombs can possess X, or any number of similar speculations. But, OK, sure, it’s possible.
So what? If it turns out that a computer has to be made of protoplasm in order to possess X, then it follows that for an upload to be able to feel things in italics, it has to be an upload running on a computer made of protoplasm. OK, that’s fine. It’s just an engineering constraint. It strikes me as a profoundly unlikely one, as I say, but even if it turns out to be true, it doesn’t matter very much.
That’s why I started out by asking you what you thought a computer was. IF people have to be made of protoplasm, AND IF computers can’t be made of protoplasm, THEN people can’t run on computers… but not only do I reject the first premise, I reject the second one as well.
“IF people have to be made of protoplasm, AND IF computers can’t be made of protoplasm, THEN people can’t run on computers… but not only do I reject the first premise, I reject the second one as well.”
Does it matter?
What if we can run some bunch of algorithms on a computer that passes the Turing test but is provably non-sentient? When it comes down to it, we're looking for something that can solve generalized problems willingly and won't deliberately try to kill us.
It's like the argument against catgirls. Some people would prefer to have human girls/boys, but trust me, sometimes a catgirl/boy would be better.
It matters for two things:
1) If we are trying to upload (the context here, if you follow the thread up a bit), then we want the emulations to be alive in whatever senses it is important to us that we are presently alive.
2) If we are building a really powerful optimization process, we want it not to be alive in whatever senses make alive things morally relevant, or we have to consider its desires as well.
OK, fair enough, if you're looking for uploads. Personally I don't care, as I take the position that the upload isn't really me; it's a simulated me, in the same way that a "spirit version of me" (i.e., a soul) isn't really me either.
Please correct my logic if I'm wrong here: in order to take the position that an upload is provably you, the only feasible way to do the test is to have other people verify that it's you. The upload saying it's you doesn't cut it, and neither does the upload merely acting exactly like you. In other words, the test for whether an upload is really you doesn't even require it to be really you, only to simulate you exactly. Which means the upload doesn't need to be sentient.
Please fill in the blanks in my understanding so I can get where you’re coming from (this is a request for information not sarcastic).
I endorse dthomas’ answer in the grandparent; we were talking about uploads.
I have no idea what to do with word “provably” here. It’s not clear to me that I’m provably me right now, or that I’ll be provably me when I wake up tomorrow morning. I don’t know how I would go about proving that I was me, as opposed to being someone else who used my body and acted just like me. I’m not sure the question even makes any sense.
To say that other people’s judgments on the matter define the issue is clearly insufficient. If you put X in a dark cave with no observers for a year, then if X is me then I’ve experienced a year of isolation and if X isn’t me then I haven’t experienced it and if X isn’t anyone then no one has experienced it. The difference between those scenarios does not depend on external observers; if you put me in a dark cave for a year with no observers, I have spent a year in a dark cave.
Mostly, I think that identity is a conceptual node that we attach to certain kinds of complex systems, because our brains are wired that way, but we can in principle decompose identity to component parts—shared memory, continuity of experience, various sorts of physical similarity, etc. -- without anything left over. If a system has all those component parts—it remembers what I remember, it remembers being me, it looks and acts like me, etc. -- then our brains will attach that conceptual node to that system, and we’ll agree that that system is me, and that’s all there is to say about that.
And if a system shares some but not all of those component parts, we may not agree whether that system is me, or we may not be sure if that system is me, or we may decide that it’s mostly me.
Personal identity is similar in this sense to national identity. We all agree that a child born to Spaniards and raised in Spain is Spanish, but is the child of a Spaniard and an Italian who was born in Barcelona and raised in Venice Spanish, or Italian, or neither, or both? There’s no way to study the child to answer that question, because the child’s national identity was never an attribute of the child in the first place.
While I do take the position that there is unlikely to be any theoretical personhood-related reason uploads would be impossible, I certainly don’t take the position that verifying an upload is a solved problem, or even that it’s necessarily ever going to be feasible.
That said, consider the following hypothetical process:
You are hooked up to sensors monitoring all of your sensory input.
We scan you thoroughly.
You walk around for a year, interacting with the world normally, and we log data.
We scan you thoroughly.
We run your first scan through our simulation software, feeding it the year’s worth of data, and find everything matches up exactly (to some ridiculous tolerance) with your second scan.
Do you expect that there is a way in which you are sentient, in which your simulation could not be if you plugged it into (say) a robot body or virtual environment that would feed it new sensory data?
That is a very good response and my answer to you is:
I don’t know AND
To me it doesn't matter, as I'm not for any kind of destructive scanning upload, ever, though I may consider slow augmentation as parts wear out.
But I’m not saying you’re wrong. I just don’t know and I don’t think it’s knowable.
That said, would I consent to being non-destructively scanned in order to be able to converse with a fast-running simulation of myself (regardless of whether it’s sentient or not)? Definitely.
What about being non-destructively scanned so you can converse with something that may be a fast running simulation of yourself, or may be something using a fast-running simulation of you to determine what to say to manipulate you?
Nice thought experiment.
No, I probably would not consent to being non-destructively scanned so that my simulated version could be evilly manipulated, regardless of whether or not it's provably sentient.
You make sense. I’m starting to think a computer could potentially be sentient. Isn’t a computer a machine, generally made of circuits, that runs programs somebody put on it in a constructed non-context-dependent language?
I personally believe that humans are likewise machines, generally made of meat, that run “programs”. I put the word “programs” in scare-quotes because our programs are very different in structure from computer programs, though the basic concept is the same.
What we have in common with computers, though, is that our programs are self-modifying. We can learn, and thus change our own code. Thus, I see no categorical difference between humans and computers, though obviously our current computers are far inferior to humans in many (though not all) areas.
That’s a perfectly workable model of a computer for our purposes, though if we were really going to get into this we’d have to further explore what a circuit is.
Personally, I’ve pretty much given up on the word “sentient”… in my experience it connotes far more than it denotes, such that discussions that involve it end up quickly reaching the point where nobody quite knows what they’re talking about, or what talking about it entails. I have the same problem with “qualia” and “soul.” (Then again, I talk comfortably about something being or not being a person, which is just as problematic, so it’s not like I’m consistent about this.)
But that aside, yeah, if any physical thing can be sentient, then I don’t see any principled reason why a computer can’t be. And if I can be implemented in a physical thing at all, then I don’t see any principled reason why I can’t be implemented in a computer.
Also (getting back to an earlier concern you expressed), if I can be implemented in a physical thing, I don’t see any principled reason why I can’t be implemented in two different physical things at the same time.
I agree, Dave. Also, I'll go further. For my own personal purposes, I care not a whit if a powerful piece of software passes the Turing test, can do cool stuff, and won't kill me, but is basically an automaton.
I would go one step further, and claim that if a piece of software passes the general Turing test—i.e., if it acts exactly like a human would act in its place—then it is not an automaton.
… over some sufficiently broad set of places.
Heh, yes, good point.
And I’d say that taking that step is a point of philosophy.
Consider this: I have a Dodge Durango sitting in my garage.
If I sell that Dodge Durango and buy an identical one (it passes all the same tests in exactly the same way), then is it the same Dodge Durango? I’d say no, but the point is irrelevant.
Why not, and why is it irrelevant? For example, if your car gets stolen, and later returned to you, wouldn’t you want to know whether you actually got your own car back?
I have to admit, your response kind of mystified me, so now I’m intrigued.
Very good questions.
No, I’d not particularly care whether it was my own car that was returned to me, because it gives me utility and it’s just a thing.
I’d care if my wife were kidnapped and some simulacrum given back in her stead, but I doubt I’d be able to tell if it were an accurate enough copy. If I knew the fake wife was fake, I’d probably be creeped out; but if I didn’t know, I’d just be glad to have my “wife” back.
In the case of the simulated porn actress, I wouldn’t really care if she was real because her utility for me would be similar to watching a movie. Once done with the simulation she would be shut off.
That said, the struggle would be with whether or not she (the catgirl version of the porn actress) was truly sentient. If she were truly sentient, then I’d be evil in the first place, because I’d be coercing her into doing evil stuff in my personal simulation. But I think there’s no viable way to determine sentience other than “if it walks like a duck and talks like a duck,” so we’re back to the beginning again, and THUS I say “it’s irrelevant.”
My primary concern in a situation like this is that she’d be kidnapped and presumably extremely not happy about that.
If my partner were vaporized in her sleep and then replaced with a perfect simulacrum, well, that’s just teleporting (with less savings on airfare). If it were a known fact that sometimes people died and were replaced by cylons, finding out someone had been cyloned recently, or that I had, wouldn’t particularly bother me. (I suppose this sounds bold, but I’m almost entirely certain that after teleporters or perfect destructive uploads or whatever were introduced, interaction with early adopters people had known before their “deaths” would rapidly swing intuitions towards personal identity being preserved. I have no idea how human psychology would react to there being multiple copies of people.)
I expect we’d adapt pretty quickly to the idea that there exists a new possible degree of relationship between people, namely the relationship between two people who used to be the same person.
The closest analogy I can think of is if I lived in a culture where families only had one child each, and was suddenly introduced to brothers. It would be strange to find two people who shared parents, a childhood environment, and so forth—attributes I was accustomed to treating as uniquely associated with a person, but it turned out I was wrong to do so. It would be disconcerting, but I expect I’d get used to it.
If you count a fertilized egg as a person, then two identical twins did use to be the same person. :-)
And chimeras used to be two different people.
While I don’t doubt that many people would be OK with this, I wouldn’t be, because of the lack of certainty and provability.
My difficulty with this concept goes further. Since it’s not verifiable that the copy is you, even though it presents the same outputs to any verifiable test, what is to prevent an AI from getting around the restriction on not destroying humanity?
“Oh but the copies running in a simulation are the same thing as the originals really”, protests the AI after all the humans have been destructively scanned and copied into a simulation...
That shouldn’t happen as long as the AI is friendly—it doesn’t want to destroy people.
But is it destroying people if the simulations are the same as the original?
There are a few interesting possibilities here:
1) The AI and I agree on what constitutes a person. In that case, the AI doesn’t destroy anything I consider a person.
2) The AI considers X a person, and I don’t. In that case, I’m OK with deleting X, but the AI isn’t.
3) I consider X a person, and the AI doesn’t. In that case, the AI is OK with deleting X, but I’m not.
You’re concerned about scenario #3, but not scenario #2. Yes?
But in scenario #2, if the AI had control, a person’s existence would be preserved, which is the goal you seem to want to achieve.
This only makes sense to me if we assume that I am always better at detecting people than the AI is.
But why would we assume that? It seems implausible to me.
Ha Ha. You’re right. Thanks for reflecting that back to me.
Yes, if you break apart my argument, that’s exactly what I’m saying, though I hadn’t broken it down to that extent before.
The last part I disagree with, which is the assumption that I’m always better at detecting people than the AI is. Clearly I’m not, but in my own personal case I don’t trust the AI if it disagrees with me, because of simple risk management: if it’s wrong, and it kills me and then resurrects a copy, then I have experienced total loss. If it’s right, then I’m still alive.
But I don’t know the answer. And thus I’d have to say that, if I were designing the AI, it would be necessary to allow only scenario #1, because though I could be wrong, I’d prefer not to take the risk of personal destruction.
That said if someone chose to destructively scan themselves to upload that would be their personal choice.
Well, I certainly agree that all else being equal we ought not kill X if there’s a doubt about whether X is a person or not, and I support building AIs in such a way that they also agreed with that.
But if for whatever reason I’m in a scenario where only one of X and Y can survive, and I believe X is a person and Y is not, and the AI says that Y is a person and X is not, and I’m the one who has to decide which of X and Y to destroy, then I need to decide whether I trust my own judgment more than the AI’s judgment, or less.
And obviously that’s going to depend on the particulars of X, Y, me, and the AI… but it’s certainly possible that I might in that situation update my beliefs and destroy X instead of Y.
I think we’re on the same page from a logical perspective.
My guess is that the difference in perspective here is that of physical science vs. compsci.
A compsci perspective would tend to view the two individuals as two instances of the class of individual X. The two class instances are logically equivalent except for position.
The physical science perspective is that there are two bunches of matter near each other, with the only thing differing being position. Basically the same scenario as two electrons with the same spin state, momentum, energy, etc., but different positions. There’s no way to distinguish the two of them by their physical properties, but there are two of them, not one.
Regardless, if you believe they are the same person then you go first through the teleportation device… ;->
In Identity Isn’t In Specific Atoms, Eliezer argued that even from what you called the “physical science perspective,” the two electrons are ontologically the same entity. What do you make of his argument?
What do I make of his argument? Well, I’m not a PhD in physics, though I do have a Bachelor’s in physics/math, so my position would be the following:
Quantum physics doesn’t scale up to the macro level. While swapping the two helium atoms in two billiard balls results in your not being able to tell which helium atom was which, the two billiard balls certainly can be distinguished from each other. Even “teleporting” one from one place to another will not result in an identical copy, since the quantum states will all have changed just by dint of having been read by the scanning device. Each time you measure, the quantum state changes. So the reason you cannot distinguish two identical copies from each other is not that they are identical; it’s that you cannot even distinguish the original from itself, because the states change each time you measure them.
Looking at a macro-scale object composed of multiple atoms A, B, and C, you could not distinguish its atoms from those of another macro-scale object composed of atoms of types A, B, and C in exactly the same configuration.
That said, we’re talking about a single object there. As soon as you compare more than one object, it’s not the same: there are the position, momentum, et cetera of the macro-scale objects to distinguish them, even though they are the same type of object.
I strongly believe that the disagreement around this topic comes from looking at things as classes, from a comp sci perspective.
From a physics perspective, it makes sense to say that two objects of the same type are different even though their properties are the same, except for minor differences such as position and momentum.
From a compsci perspective, talking about the position and momentum of instances of classes doesn’t make any sense. The two instances of the class ARE the same, because they are logically the same.
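The distinction being drawn here maps directly onto equality versus identity in most object-oriented languages: two objects can be logically equal (`==`) while still being two distinct instances (`is`). A minimal Python sketch, with a purely illustrative `Person` class:

```python
class Person:
    """Illustrative stand-in for 'the class of individual X'."""
    def __init__(self, memories):
        self.memories = memories

    def __eq__(self, other):
        # Logical equivalence: same information content.
        return isinstance(other, Person) and self.memories == other.memories

original = Person(memories=["first kiss", "bachelor's in physics"])
copy = Person(memories=["first kiss", "bachelor's in physics"])

print(original == copy)  # True  -- logically the same (the compsci perspective)
print(original is copy)  # False -- two distinct instances (the physics perspective)
```

Neither operator is the “right” notion of sameness; which one matters is exactly what the two perspectives disagree about.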
Anyway, I’ve segued here. Take the two putative electrons in a previous post above: there is no way to distinguish between them except by position, but they ARE two separate electrons, not a single electron. If one of them is part of, e.g., my brain, and is then swapped out for the other, there’s no longer any way to tell which is which. It’s impossible. And my guess is that this is what’s causing the confusion. From the point of view of usefulness, neither of the two objects differs from the other. But they are separate from each other, and destroying one doesn’t mean that there are still two of them; there is now only one, and one has been destroyed.
Dave seems to take the position that that is fine, because the position and number of copies are irrelevant to him; it’s the information content that’s important.
For me, sure, if my information content lived on, that would be better than nothing, but it wouldn’t be me.
I wouldn’t take a destructive upload if I didn’t know that I would survive it (in the senses I care about), in roughly the same sense that I wouldn’t cross the street if I didn’t know I wasn’t going to be killed by a passing car. In both cases, I require reasonable assurance. In neither case does it have to be absolute.
Exactly. Reasonable assurance is good enough, absolute isn’t necessary. I’m not willing to be destructively scanned even if a copy of me thinks it’s me, looks like me, and acts like me.
That said I’m willing to accept the other stance that others take: they believe they are reasonably convinced that destructive scanning just means they will appear somewhere else a fraction of a second (or however long it takes). Just don’t ask me to do it. And expect a bullet if you try to force me!
Well, sure. But if we create an economy around you where people who insist on carrying a sack of atoms around with them wherever they go are increasingly a minority… for example, if we stop maintaining roads for you to drive a car on, stop flying airplanes to carry your atoms from place to place, etc. … what then?
This is a different point entirely. Sure it’s more efficient to just work with instances of similar objects and I’ve already said elsewhere I’m OK with that if it’s objects.
And if everyone else is OK with being destructively scanned then I guess I’ll have to eke out an existence as a savage. The economy can have my atoms after I’m dead.
Sorry I wasn’t clear—the sack of atoms I had in mind was the one comprising your body, not other objects.
Also, my point is that it’s not just a case of live and let live. Presumably, if the rest of us giving up the habit of carrying our bodies wherever we go means you are reduced to eking out your existence as a savage, then you will be prepared to devote quite a lot of resources to preventing us from giving up that habit… yes?
Yes that’s right.
I will not consent to being involuntarily destructively scanned, and yes, I will devote all of my resources to preventing myself from being involuntarily destructively scanned.
That said, if you or anyone else wants to do it to themselves voluntarily it’s none of my business.
If what you’re really asking, however, is whether I will attempt to intervene if I notice a group of individuals or an organization forcing destructive scanning on individuals, I suspect that I might, but we’re not there yet.
I understand that you won’t consent to being destructively scanned, and that you might intervene to prevent others from being destructively scanned without their consent. That isn’t what I asked.
I encourage you to re-read my question. If, after doing so, you still think your reply answers it, then I think we do best to leave it at that.
I thought I had answered but perhaps I answered what I read into it.
If you are asking “will I prevent you from gradually moving everything to digital perhaps including yourselves” then the answer is no.
I just wanted to clarify that we were talking about with consent vs without consent.
I agree completely that there are two bunches of matter in this scenario. There are also (from what you’re labeling the compsci perspective) two data structures. This is true.
My question is, why should I care? What value does the one on the left have, that the one on the right doesn’t have, such that having them both is more valuable than having just one of them? Why is destroying one of them a bad thing? What you seem to be saying is that they are valuable because they are different people… but what makes that a source of value?
For example: to my way of thinking, what’s valuable about a person is the data associated with them, and the patterns of interaction between that data and its surroundings. Therefore, I conclude that if I have that data and those interactions then I have preserved what’s valuable about the person. There are other things associated with them—for example, a particular set of atoms—but from my perspective that’s pretty valueless. If I lose the atoms while preserving the data, I don’t care. I can always find more atoms; I can always construct a new body. But if I lose the data, that’s the ball game—I can’t reconstruct it.
In the same sense, what I care about in a book is the data, not the individual pieces of paper. If I shred the paper while digitizing the book, I don’t care… I’ve kept what’s valuable. If I keep the paper while allowing the patterns of ink on the pages to be randomized, I do care… I’ve lost what’s valuable.
So when I look at a system to determine how many people are present in that system, what I’m counting is unique patterns of data, not pounds of biomass, or digestive systems, or bodies. All of those things are certainly present, but they aren’t what’s valuable to me. And if the system comprises two bodies, or five, or fifty, or a million, and they all embody precisely the same data, then I can preserve what’s valuable about them with one copy of that data… I don’t need to lug a million bundles of atoms around.
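The counting rule described above (count unique patterns of data, not bodies) amounts to deduplicating by information content rather than by instance. A toy illustration, where the mind-state strings and the `fingerprint` helper are made up purely for the example:

```python
import hashlib

def fingerprint(mind_state: str) -> str:
    """Hash of the information content -- the 'unique pattern of data' being counted."""
    return hashlib.sha256(mind_state.encode()).hexdigest()

# A system containing five identical embodiments of person X and one of person Y.
embodiments = ["memories+personality of X"] * 5 + ["memories+personality of Y"]

bodies = len(embodiments)                            # counts bundles of atoms
people = len({fingerprint(m) for m in embodiments})  # counts unique data patterns

print(bodies, people)  # prints: 6 2
```

On this accounting, deleting four of the five X-embodiments changes `bodies` but leaves `people` untouched, which is precisely the claim being made in the paragraph above.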
So, as I say, that’s me… that’s what I value, and consequently what I think is important to preserve. You think it’s important to preserve the individual bundles, so I assume you value something different.
What do you value?
More particularly, you regularly change out your atoms.
That turns out to be true, but I suspect everything I say above would be just as true if I kept the same set of atoms in perpetuity.
I agree that it would still be true, but our existence would be less strong an example of it.
I understand that you value the information content and I’m OK with your position.
Let’s do another thought experiment, then: Say we’re some unknown X number of years in the future, and some foreign entity/government/whatever decided it wanted the territory of the United States (could be any country; I’m just using the USA as an example) but didn’t want the people. It did, however, value the ideas, opinions, memories, etc. of the American people. If said entity then destructively scanned the landmass, but painstakingly copied all of the ideas, opinions, memories, etc. into some kind of data store, which it could access at its leisure later, would that be the same thing as the original living people?
I’d argue that from a comp sci perspective what you have just done is build a static class which describes the people, their ideas, memories, etc., but this is not the original people; it’s just a model of them.
Now don’t get me wrong, a model like that would be very valuable; it just wouldn’t be the original.
And yes, of course some people value originals; otherwise you wouldn’t have to pay millions of dollars for postage stamps printed in the 1800s, even though I’d guess that scanning such a stamp and printing out a copy of it would, to all intents and purposes, be the same.
In the thought experiment you describe, they’ve preserved the data and not the patterns of interaction (that is, they’ve replaced a dynamic system with a static snapshot of that system), and something of value is therefore missing, although they have preserved the ability to restore the missing component at their will.
If they execute the model and allow the resulting patterns of interaction to evolve in an artificial environment they control, then yes, that would be just as valuable to me as taking the original living people and putting them into an artificial environment they control.
I understand that there’s something else in the original that you value, which I don’t… or at least, which I haven’t thought about. I’m trying to understand what it is. Is it the atoms? Is it uninterrupted continuous existence (e.g., if you were displaced forward in time by two seconds, such that for a two-second period you didn’t exist, would that be better or worse than, or the same as, destroying you and creating an identical copy two seconds later)? Is it something else?
Similarly, if you valued a postage stamp printed in the 1800s more than the result of destructively scanning such a stamp and creating an atom-by-atom replica of it, I would want to understand what about the original stamp you valued, such that the value was lost in that process.
Thus far, the only answer I can infer from your responses is that you value being the original… and the value of that doesn’t derive from anything; it’s just a primitive. Is that it?
If so, a thought experiment for you in return: if I convince you that last night I scanned xxd and created an identical duplicate, and that you are that duplicate, do you consequently become convinced that your existence is less valuable than you’d previously thought?
I guess from your perspective you could say that the value of being the original doesn’t derive from anything and is just a primitive, because the macro-level information is the same except for position (though the quantum states are all different, even at the point of copying). But yes, I value the original more than the copy, because I consider the original to be me and the others to be just copies, even if they would legally, and in fact, be sentient beings in their own right.
Yes. If I woke up tomorrow and you could convince me I was just a copy (this is something I have already modeled/daydreamed about), my answer would be: I’d be disappointed that I wasn’t the original, but glad that I had existence.
OK.
Hmm
I find “if it walks like a duck and talks like a duck” to be a really good way of identifying ducks.
Agreed. It’s the only way we have of verifying that it’s a duck.
But is the destructively scanned duck the original duck, even though it appears the same to all intents and purposes, when you can see the mulch that used to be the body of the original lying there beside the new copy?
I’m not sure that duck identity works like personal identity. If I destroy a rock but make an exact copy of it ten feet to the east, whether or not the two rocks share identity just depends on how you want to define identity—the rock doesn’t care, and I’m not convinced a duck would care either. Personal identity, however, is a whole other thing—there’s this bunch of stuff we care about to do with having the right memories and the correct personality and utility function etc., and if these things aren’t right it’s not the same person. If you make a perfect copy of a person and destroy the original, then it’s the same person. You’ve just teleported them—even if you can see the left over dust from the destruction. Being made of the “same” atoms, after all, has nothing to do with identity—atoms don’t have individual identities.
That’s a point of philosophical disagreement between us. Here’s why:
Take an individual.
Then take a cell from that individual. Grow it in a nutrient bath. Force it to divide. Rinse, wash, repeat.
You create a clone of that person.
Now is that clone the same as the original? No it is not. It is a copy. Or in a natural version of this, a twin.
Now let’s say technology exists to transfer memories and mind states.
After you create the clone-that-is-not-you you then put your memories into it.
If we keep the original alive, the clone is still not you. How does killing the original QUICKLY make the clone you?
(shrug) After the process you describe, there exist two people in identical bodies with identical memories. What conceivable difference does it make which of those people we label “me”? What conceivable difference does it make whether we label both of those people “me”?
If there is some X that differs between those people, such that the label “me” applies to one value of X but not the other value, then talking about which one is “me” makes sense. We might not be able to detect the difference, but there is a difference; if we improved the quality of our X-detectors we would be able to detect it.
But if there is no such X, then for as long as we continue talking about which of those people is “me,” we are not talking about anything in the world. Under those circumstances it’s best to set aside the question of which is “me.”
“(shrug) After the process you describe, there exist two people in identical bodies with identical memories. What conceivable difference does it make which of those people we label ‘me’? What conceivable difference does it make whether we label both of those people ‘me’?”
Because we already have a legal precedent: twins. Though their memories are very limited, they are legally different people. My position is that this is rightly so.
Identical twins, even at birth, are different people: they’re genetically identical and shared a very close prenatal environment, but the actual fork happened sometime during the zygote stage of development, when neither twin had a nervous system let alone a mind-state. But I’m not sure why you’re bringing this up in the first place: legalities don’t help us settle philosophical questions. At best they point to a formalization of the folk solution.
As best I can tell, you’re trying to suggest that individual personhood is bound to a particular physical instance of a human being (albeit without actually saying so). Fair enough, but I’m not sure I know of any evidence for that proposition other than vague and usually implicitly dualist intuitions. I’m not a specialist in this area, though. What’s your reasoning?
Risk avoidance. I’m uncomfortable with taking the position that, if you create a second copy and destroy the original, the copy is the original, simply because if it isn’t, then the original is now dead.
Yes, but how do you conclude that a risk exists? Two philosophical positions don’t mean fifty-fifty chances that one is correct; intuition is literally the only evidence for one of the alternatives here to the best of my knowledge, and we already know that human intuitions can go badly off the rails when confronted with problems related to anthropomorphism.
Granted, we can’t yet trace down human thoughts and motivations to the neuron level, but we’ll certainly be able to by the time we’re able to destructively scan people into simulations; if there’s any secret sauce involved, we’ll by then know it’s there if not exactly what it is. If dualism turns out to win by then I’ll gladly admit I was wrong; but if any evidence hasn’t shown up by that time, it sounds an awful lot like all there is to fall back on is the failure mode in “But There’s Still A Chance, Right?”.
Here’s why I conclude a risk exists: http://lesswrong.com/lw/b9/welcome_to_less_wrong/5huo?context=1#5huo
I read that earlier, and it doesn’t answer the question. If you believe that the second copy in your scenario is different from the first copy in some deep existential sense at the time of division (equivalently, that personhood corresponds to something other than unique brain state), you’ve already assumed a conclusion to all questions along these lines—and in fact gone past all questions of risk of death and into certainty.
But you haven’t provided any reasoning for that belief: you’ve just outlined the consequences of it from several different angles.
Yes, we have two people after this process has completed… I said that in the first place. What follows from that?
EDIT: Reading your other comments, I think I now understand what you’re getting at.
No, if we’re talking about only the instant of duplication and not any other instant, then I would say that in that instant we have one person in two locations.
But as soon as the person at those locations start to accumulate independent experiences, then we have two people.
Similarly, if I create a static backup of a snapshot of myself, and create a dozen duplicates of that backup, I haven’t created a dozen new people, and if I delete all of those duplicates I haven’t destroyed any people.
The uniqueness of experience is important.
this follows: http://lesswrong.com/lw/b9/welcome_to_less_wrong/5huo?context=1#5huo
I agree that the clone is not me until you write my brain-states onto his brain (poor clone). At that point it is me—it has my brain states. Both the clone and the original are identical to the one who existed before my brain-states were copied—but they’re not identical to each other, since they would start to have different experiences immediately. “Identical” here meaning “that same person as”—not exact isomorphic copies. It seems obvious to me that personal identity cannot be a matter of isomorphism, since I’m not an exact copy of myself from five seconds ago anyway. So the answer to the question is killing the original quickly doesn’t make a difference to the identity of a clone, but if you allow the original to live a while, it becomes a unique person, and killing him is immoral. Tell me if I’m not being clear.
Regardless of what you believe you’re avoiding the interesting question: if you overwrite your clone’s memories and personality with your own, is that clone the same person as you? If not, what is still different?
I don’t think anyone doubts that a clone of me without my memories is a different person.
Right, but presumably, you would be unhappy if your Ferrari got stolen and you got a Yaris back. In fact, you might be unhappy even if your Yaris got stolen and you got a Ferrari back—wouldn’t you be?
If the copy was so perfect that you couldn’t tell that it wasn’t your wife, no matter what tests you ran, then would you believe anyone who told you that this being was in fact a copy, and not your wife at all?
I agree (I think), but then I am tempted to conclude that creating fully sentient beings merely for my own amusement is, at best, ethically questionable.
Really good discussion.
Would I believe? I think the answer would depend on whether I could find the original or not. I would, however, find it disturbing to be told that the copy was a copy.
And yes, if the beings are fully sentient, then I agree it’s ethically questionable. But since we cannot tell, it comes down to the conscience of the individual, so I guess I’m evil, then.
Finding the original, and determining that it is, in fact, the original, would constitute a test you could run to determine whether your current wife is a replica or not. Thus, under our scenario, finding the original would be impossible.
Disturbing how? Wouldn’t you automatically dismiss the person who tells you this as a crazy person? If not, why not?
Er… ok, that’s good to know. edges away slowly
Personally, if I encountered some beings who appeared to be sentient, I’d find it very difficult to force them to do my bidding (through brute force, or by overwriting their minds, or by any other means). Sure, it’s possible that they’re not really sentient, but why risk it, when the probability of this being the case is so low?
You’re right. It is impossible to determine whether the current copy is the original or not.
“Disturbing how?” Yes, I would dismiss the person as being a fruitbar, of course. But if the technology existed to destructively scan an individual and copy them into a simulation, or even reconstitute them from different atoms after being destructively scanned, I’d be really uncomfortable with it. I personally would strenuously object to ever teleporting myself, or copying myself by this method into a simulation.
“edges away slowly” lol. No more evil than (I believe it was Phil) the person who explicitly stated he would kill others who sought to prevent the building of an AI based on his utility function. I would fight to prevent the construction of an AI based on anything but the average utility function of humanity, even if it excluded my own maximized utility function, because I’m honest enough to say that maximizing my own personal utility function is not in the best interests of humanity. Even then, I believe that producing an AI whose utility function maximizes the best interests of humanity is incredibly difficult, and thus have concluded that creating an AI whose definition is just NOT(Unfriendly), and attempting to trade with it, is probably far easier. Though I have not read Eliezer’s CEV paper, so I require further input.
“difficult to force them to do my bidding”.
I don’t know if you enjoy video games or not. Right now there’s a first-person shooter called Modern Warfare 3. It’s pretty damn realistic, though the non-player characters (NPCs), which you shoot and kill, are automatons, and we know for sure that they’re automatons. Now fast-forward 20 years, and we have NPCs which are so realistic that to all intents and purposes they pass the Turing test. Is killing these NPCs in Modern Warfare 25 murder?
What if the reconstitution process was so flawless that there was no possible test your wife could run to determine whether or not you’d been teleported in this manner? Would you still be uncomfortable with the process? If so, why, and how does it differ from the reversed situation that we discussed previously?
Whoever that Phil guy is, I’m going to walk away briskly from him, as well. Walking backwards. So as not to break the line of sight.
I haven’t played that particular shooter, but I am reasonably certain that these NPCs wouldn’t come anywhere close to passing the Turing Test. Not even the dog version of the Turing Test.
I would say that, most likely, yes, it is murder.
I’m talking exactly about a process that is so flawless you can’t tell the difference. Where my concern comes from is this: if you don’t destroy the original, you now have two copies. One is the original (although you can’t tell the difference between the copy and the original) and the other is the copy.
Now, where I’m uncomfortable is this: if we then kill the original by letting Freddie Krueger or Jason do his evil thing, then, though the copy is still alive AND is/was indistinguishable from the original, the alternative hypothesis (which I oppose) states that the original is still alive, and yet I can see the dead body there.
Simply speeding the process up, perhaps by vaporizing the original, doesn’t make the outcome any different: the original is still dead.
It gets murkier if the original is destructively scanned and then rebuilt from the same atoms but I’d still be reluctant to do this myself.
That said, I’d be willing to become a hybrid organism slowly by replacing parts of me and although it wouldn’t be the original me at the end of the total replacement process it would still be the hybrid “me”.
Interesting position on the killing of the NPCs. In terms of usefulness, that’s why it doesn’t matter to me whether a being is sentient or not in order to meet my definition of AI.
If I make a perfect copy of myself, then at the instant of duplication there exists one person at two locations. A moment later, the entities at those two locations start having non-identical experiences and entering different mental states, and thereby become different people (who aren’t one another, although both of them are me). If prior to duplication I program a device to kill me once and only once, then I die, and I have killed myself, and I continue to live.
I agree that this is a somewhat confusing way of talking, because we’re not used to life and death and identity working that way, but we have a long history of technological innovations changing the way we talk about things.
I understand your logic completely, but I don’t buy it, because I don’t agree that at the instant of the copying you have one person at two locations. They are two different people: one being the original and the other being an exact copy.
Which one is which? And why?
OK, cool… I understand you, then.
Can you clarify what, if anything, is uniquely valuable about a person who is an exact copy of another person?
Or is this a case where we have two different people, neither of whom have any unique value?
Well, think of it this way: Copy A and Copy B are both Person X. Copy A is then executed. Person X is still alive because Copy B is Person X. Copy A is dead. Nothing inconsistent there—and you have a perfectly fine explanation for the presence of a dead body.
There is no such thing as “the same atoms”—atoms do not have individual identities.
I don’t think anyone was arguing that the AI needed to be conscious—intelligence and consciousness are orthogonal.
K here’s where we disagree:
Original Copy A and new Copy B are indeed instances of person X, but it’s not a class with two instances as in CompSci 101. The class is Original A, and it’s B that is the instance. They are different people.
In order to make them the same person you’d need to do something like this: put some kind of high-bandwidth wifi in their heads which synchronizes their memories. Then they’d be part of the same hybrid entity. But at no point are they the same person.
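Since the CompSci class/instance analogy keeps coming up, here is a minimal sketch (in Python, with purely illustrative names, not anything from the discussion itself) of the distinction both sides seem to be gesturing at: two objects can be equal in information content while remaining distinct instances.

```python
class PersonX:
    """Toy stand-in for a person, identified only by information content."""

    def __init__(self, memories):
        self.memories = memories

    def __eq__(self, other):
        # Equality compares information content only.
        return self.memories == other.memories


original = PersonX(memories=["childhood", "yesterday"])
copy = PersonX(memories=["childhood", "yesterday"])

print(original == copy)   # True: identical information content
print(original is copy)   # False: two distinct objects
```

On this toy model, TheOtherDave’s position amounts to caring about `==` (same information, same person), while xxd’s amounts to caring about `is` (distinct instances, hence distinct people, whatever their content).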
I don’t know why it matters which is the original—the only difference between the original and the copy is location. A moment after the copy happens, their mental states begin to diverge because they have different experiences, and they become different people to each other—but they’re both still Person X.
It matters to you if you’re the original and then you are killed.
You are right that they are both instances of person X, but my argument is that this is not equivalent to them being the same person in fact, or even in law (whatever that means).
Also when/if this comes about I bet the law will side with me and define them as two different people in the eyes of the law. (And I’m not using this to fallaciously argue from authority, just pointing out I strongly believe I am correct—though willing to concede if there is ultimately some logical way to prove they are the same person.)
The reason is obvious. If they are the same person and one of them kills someone are both of them guilty? If one fathers a child, is the child the offspring of both of them?
Because of this, I cannot agree beyond saying that the two different people are copies of person X. Even you are prepared to concede that they are different people to each other after their mental states begin to diverge, so I can’t close the logical gap: why do you say they are the same person, and not copies of the same person, one being the original? You come partway to saying they are different people. Why not come all the way?
I agree with TheOtherDave. If you imagine that we scan someone’s brain and then run one-thousand simulations of them walking around the same environment, all having exactly the same experiences, it doesn’t matter if we turn one of those simulations off. Nobody’s died. What I’m saying is that the person is the mental states, and what it means for two people to be different people is that they have different mental states. I’m not really sure about the morality of punishing them both for the crimes of one of them, though. On one hand, the one who didn’t do it isn’t the same person as the one who did—they didn’t actually experience committing the murder or whatever. On the other hand, they’re also someone who would have done it in the same circumstances—so they’re dangerous. I don’t know.
You are decreasing the amount of that person that exists.
Suppose the many-worlds interpretation is true. Now I flip a fair quantum coin, and kill you if it comes up heads. Then in 50% of the worlds you still live, so by your reasoning, nobody has died. All that changes is the amplitude of your existence.
Well, maybe. But there is a whole universe full of people who will never speak to you again and are left to grieve over your body.
Good point.
There is of course a difference between death and non-existence.
Yes, there is a measure of that person’s existence (number of perfect copies) which I’m reducing by deleting a perfect copy of that person. What I’m saying is precisely that I don’t care, because that is not a measure of people I value.
Similarly, if I gain 10 pounds, there’s a measure of my existence (mass) which I thereby increase. I don’t care, because that’s not a measure of people I value.
Neither of those statements is quite true, admittedly. For example, I care about gaining 10 pounds because of knock-on effects—health, vanity, comfort, etc. I care about gaining an identical backup because of knock-on effects—reduced risk of my total destruction, for example. Similarly, I care about gaining a million dollars, I care about gaining the ability to fly, there’s all kinds of things that I care about. But I assume that your point here is not that identical copies are valuable in some sense, but that they are valuable in some special sense, and I just don’t see it.
As far as MWI goes, yes… if you posit a version of many-worlds where the various branches are identical, then I don’t care if you delete half of those identical branches. I do care if you delete me from half of them, because that causes my loved ones in those branches to suffer… or half-suffer, if you like. Also, because the fact that those branches have suddenly become non-identical (since I’m in some and not the others) makes me question the premise that they are identical branches.
And this “amount” is measured by the number of simulations? What if one simulation is using double the amount of atoms (e.g. by having thicker transistors), does it count twice as much? What if one simulation double checks each result, and another does not, does it count as two?
The equivalence between copies spread across the many worlds, and between identical simulations running in the same world, is yet to be proven or disproven, and I expect it won’t be until we have some better understanding of the hard problem of consciousness.
Can’t speak for APMason, but I say it because what matters to me is the information.
If the information is different, and the information constitutes people, then it constitutes different people. If the information is the same, then it’s the same person. If a person doesn’t contain any unique information, whether they live or die doesn’t matter nearly as much to me as if they do.
And to my mind, what the law decides to do is an unrelated issue. The law might decide to hold me accountable for the actions of my 6-month-old, but that doesn’t make us the same person. The law might decide not to hold me accountable for what I did ten years ago, but that doesn’t mean I’m a different person than I was. The law might decide to hold me accountable for what I did ten years ago, but that doesn’t mean I’m the same person I was.
“If the information is different, and the information constitutes people, then it constitutes different people.”
True, and therein lies the problem. Let’s do two comparisons. You have two entities: one the original, the other the copy.
Compare them on the macro scale (i.e. non quantum). They are identical except for position and momentum.
Now let’s compare them on the quantum scale: even at the point where they are identical on the macro scale, they are not identical on the quantum scale. All the quantum states are different. Just the simple act of observing the states (either by scanning the original or by rebuilding the copy) changes them, and thus on the quantum scale we have two different entities, even though they are identical on the macro scale except for position and momentum.
Using your argument that it’s the information content that’s important: they don’t really have any useful differences in information content, especially not on the macro scale, but they have significant differences in all of their non-useful quantum states. They are physically different entities.
Basically what you’re talking about is using a lossy algorithm to copy the individuals. At the level of detail you care about they are the same. At a higher level of detail they are distinct.
I’m thus uncomfortable with killing one of them and then saying the person still exists.
So, what you value is the information lost during the copy process? That is, we’ve been saying “a perfect copy,” but your concern is that no copy that actually exists could actually be a perfect copy, and the imperfect copies we could actually create aren’t good enough?
Again, just to be clear, what I’m trying to understand is what you value that I don’t. If data at these high levels of granularity is what you value, then I understand your objection. Is it?
“Again, just to be clear, what I’m trying to understand is what you value that I don’t. If data at these high levels of granularity is what you value, then I understand your objection. Is it?”
OK, I’ve mulled your question over, and I think I now have the subtlety of what you are asking down, as distinct from the slight variation I answered.
Since I value my own life, I want to be sure that it’s actually me that’s alive if you plan to kill me. Because we’re basically creating an additional copy really quickly and then disposing of the original, I have a hard time believing that we’re doing something equivalent to a single copy walking through a gate.
I don’t believe that the information by itself is enough to answer the question “Is it the original me?” in the affirmative. And given that it’s not even all of the information (though it is all of the information on the macro scale), I know for a fact we’re doing a lossy copy. The quantum states are possibly irrelevant on a macro scale for determining whether (A == B), but since I know from physics that they’re not exactly equivalent once you go down to the quantum level, I just can’t buy into it, though things would be murkier if the quantum states were provably identical.
Does that answer your question?
Maybe?
Here’s what I’ve understood; let me know if I’ve misunderstood anything.
Suppose P is a person who was created and preserved in the ordinary way, with no funky hypothetical copy/delete operations involved. There is consequently something about P that you value… call that “something” X for convenience.
If P’ is a duplicate of P, then P’ does not possess X, or at least cannot be demonstrated to possess X.
This only applies to people; non-person objects either do not possess X in the first place, or if they do, it is possible in principle for a duplication process to create a duplicate that also possesses X.
X is preserved for P from one moment/day/year to the next, even though P’s information content—at a macroscopic level, let alone a quantum one—changes over time. I conclude that X does not depend on P’s information content at all, even on a macroscopic level, and all this discussion of preserving quantum states is a red herring.
By similar reasoning, I conclude that X doesn’t depend on atoms, since the atoms of which P is comprised change over time. The same is true of energy levels.
I don’t have any idea of what that X might actually be; since we’ve eliminated from consideration everything about people I’m aware of.
I’m still interested in more details about X, beyond the definitional attribute of “X is that thing P has that P’ doesn’t”, but I no longer believe I can elicit those details through further discussion.
EDIT: Yes, you did understand, though I can’t personally come out and say definitively that the X is a red herring, even though it sounds like you are willing to do so.
I think it’s an axiomatic difference Dave.
It appears from my side of the table that you’re starting from the axiom that all that’s important is information, and that originality and/or physical existence, beyond the information it encodes, means nothing.
And you’re dismissing the quantum states as if they are irrelevant. They may be irrelevant, but since there is some difference between the two copies below the macro scale (and the position is different, and the atoms are different, though unidentifiably so, other than saying that the count is 2x rather than x of atoms), it’s impossible to dismiss the question “Am I dying when I do this?”, because you are making a lossy copy even from your standpoint. The only get-out clause is to say “it’s a close enough copy, because the quantum states and position are irrelevant, since we can’t measure the difference between atoms in two identical copies on the macro scale other than saying we’ve now got 2x the same atoms whereas before we had 1x.”
It’s exactly analogous to a bacterium budding. The original cell dies, and a close-to-exact copy is budded off. If the daughter bacterium were an exact copy of the information content of the original bacterium, then you’d have to say, from your position, that it’s the same bacterium and the original is not dead, right? Or maybe you’d say that it doesn’t matter that the original died.
My response to that argument (if it were the line of reasoning you took; is it?) would be that it matters volitionally: if the original didn’t want to die and it was forced to bud, then it’s been killed.
I did not say the X is a red herring. If you believe I did, I recommend re-reading my comment.
The X is far from being a red herring; rather, the X is precisely what I was trying to elicit details about for a while. (As I said above, I no longer believe I can do so through further discussion.)
But I did say that identity of quantum states is a red herring.
As I said before, I conclude this from the fact that you believe you are the same person you were last year, even though your quantum states aren’t identical. If you believe that X can remain unchanged while Y changes, then you don’t believe that X depends on Y; if you believe that identity can remain unchanged while quantum states change, then you don’t believe that identity depends on quantum states.
To put this another way: if changes in my quantum states are equivalent to my death, then I die constantly and am constantly replaced by new people who aren’t me. This has happened many times in the course of writing this comment. If this is already happening anyway, I don’t see any particular reason to avoid having the new person appear instantaneously in my mom’s house, rather than having it appear in an airplane seat an incremental distance closer to my mom’s house.
Other stuff:
Yes, I would say that if the daughter cell is identical to the parent cell, then it doesn’t matter that the parent cell died at the instant of budding.
I would also say that it doesn’t matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren’t identical to the cells that comprised me then.
I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)
I agree that volition is important for its own sake, but I don’t understand what volition has to do with what we’ve thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn’t kill the original, then it doesn’t, whether the original wants to die or not. It might be valuable to respect people’s volition, but if so, it’s for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)
A question for you: if someone wants to stop existing, and they destructively scan themselves, am I violating their wishes if I construct a perfect duplicate from the scan? I assume your answer is “no,” since the duplicate isn’t them; they stopped existing just as they desired.
Other stuff:
“Yes, I would say that if the daughter cell is identical to the parent cell, then it doesn’t matter that the parent cell died at the instant of budding.”
OK good to know. I’ll have other questions but I need to mull it over.
“I would also say that it doesn’t matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren’t identical to the cells that comprised me then.” I agree with this but I don’t think it supports your line of reasoning. I’ll explain why after my meeting this afternoon.
“I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)” Interesting. I have a contrary line of argument which I’ll explain this afternoon.
“I agree that volition is important for its own sake, but I don’t understand what volition has to do with what we’ve thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn’t kill the original, then it doesn’t, whether the original wants to die or not. It might be valuable to respect people’s volition, but if so, it’s for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)” Disagree. Again I’ll explain why later.
“A question for you: if someone wants to stop existing, and they destructively scan themselves, am I violating their wishes if I construct a perfect duplicate from the scan? I assume your answer is “no,” since the duplicate isn’t them; they stopped existing just as they desired.” Maybe. If you have destructively scanned them, then you have killed them, so they now no longer exist; on that part you have complied perfectly with their wishes, from my point of view. But in order to then make a copy, have you asked their permission? Have they signed a contract saying they have given you the right to make copies? Do they even own the right to make copies? I don’t know.
What I can say is that our differences in opinion here would make a superb science fiction story.
There’s a lot of decent SF on this theme. If you haven’t read John Varley’s Eight Worlds stuff, I recommend it; he has a lot of fun with this. His short stories are better than his novels, IMHO, but harder to find. “Steel Beach” isn’t a bad place to start.
Thanks for the suggestion. Yes, I have already read it (Steel Beach). It was OK but didn’t really touch much on our points of contention as such. In fact I’d say it steered clear of them, since there wasn’t really the concept of uploads etc. Interestingly, I haven’t read anything that really examines closely whether the copied upload really is you. Anyways.
“I would also say that it doesn’t matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren’t identical to the cells that comprised me then.”
OK, I have to say that now I’ve thought it through, I think “you’re not the same as you were yesterday” is a straw-man argument, used as a pretext for saying that copying is no different from ordinary moment-to-moment existence. It misses the point entirely.
Although you are legally the same person, it’s true that you’re not exactly physically the same person today as you were yesterday and it’s also true that you have almost none of the original physical matter or cells in you today as you had when you were a child.
That this is true in no way negates the main point: human physical existence does have continuity from one point in time to the next. I have some of the same cells I had up to about seven to ten years ago, I have some inert matter in me from the time I was born, AND I have continuous memories to a greater or lesser extent. This is directly analogous to the position I posted before about a slow hybridizing transition to machine form, before I had even clearly thought this out consciously.
Building a copy of yourself and then destroying the original has no continuity. It’s directly analogous to asexually budding off a new copy of yourself and then imprinting it with your memories, and it is patently not the same concept as normal human existence. Not even close.
That you and some others might dismiss the differences is fine, and if you hypothetically wanted to take the position that killing yourself so that a copy of your mind state could exist indefinitely is acceptable, then I have no problem with that; but it’s patently not the same as the process you, I, and everyone else go through on a day-to-day basis. It’s a new thing. (Although it’s already been tried in nature, as the asexual budding process of bacteria.)
I would appreciate, however, that if that choice is being offered to others, it is clearly explained to them what is happening: i.e. physical body death and a copy being resurrected, not that they themselves continue living, because they do not. Whether you consider it irrelevant is beside the point. Volition is very important, but I’ll get to that later.
“I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)”
That’s directly analogous to the many-worlds interpretation of quantum physics, which has multiple timelines. You could argue from that perspective that death is irrelevant, because in an infinitude of possibilities, if one of your instances dies then you go on existing. Fine, but it’s not me. I’m mortal and always will be, even if some virtual copy of me might not be. So you guessed correctly: unless we’re using some different definition of “person” (which is likely, I think), the person did not survive.
“I agree that volition is important for its own sake, but I don’t understand what volition has to do with what we’ve thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn’t kill the original, then it doesn’t, whether the original wants to die or not. It might be valuable to respect people’s volition, but if so, it’s for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)”
Volition has everything to do with it. While it’s true that volition is independent of whether they have died or not (agreed), the reason it’s important is that some people will likely take your position to justify forced destructive scanning at some point because it’s “less wasteful of resources” or some other pretext.
It’s also particularly important in the case of an AI over which humanity would have no control. If the AI decides that uploads via destructive scanning are exactly the same thing as the original, and it needs the space for its purposes, then there is nothing to stop it from just going ahead, unless volition is considered to be important.
Here’s a question for you: Do you have a problem with involuntary forced destructive scanning in order to upload individuals into some other substrate (or even a copied clone)?
So here’s a scenario for you, given that you think information is the only important thing: do you consider a person who has lost much of their memory to be the same person? What if such a person then has a backed-up copy of their memories from six months ago imprinted on top? Did they just die? What if it’s someone else’s memories: did they just die?
Here’s yet another scenario. I wonder if you have thought about this one: scan a person destructively (with their permission). Keep their scan in storage on some static substrate. Then grow a perfectly identical clone of them (using “identical” to mean functionally identical, because we can’t get exactly identical, as discussed before). Copy the contents of the mind-state into that clone.
Ask yourself this question: How many deaths have taken place here?
I agree that there is physical continuity from moment to moment in typical human existence, and that there is similar continuity with a slow transition to a nonhuman form. I agree that there is no such continuity with an instantaneous copy-and-destroy operation.
I understand that you consider that difference uniquely important, such that I continue living in the first case, and I don’t continue living in the second case.
I infer that you believe in some uniquely important attribute to my self that is preserved by the first process, and not preserved by the second process.
I agree that if a person is being offered a choice, it is important for that person to understand the choice. I’m perfectly content to describe the choice as between the death of one body and the creation of another, on the one hand, and the continued survival of a single body, on the other. I’m perfectly content not to describe the latter process as the continuation of an existing life.
I endorse individuals getting to make informed choices about their continued life, and their continued existence as people, and the parameters of that existence. I endorse respecting both their stated wishes, and (insofar as possible) their volition, and I acknowledge that these can conflict given imperfect information about the world.
Yes. As I say, I endorse respecting individuals’ stated wishes, and I endorse them getting to make informed choices about their continued existence and the parameters of that existence; involuntary destructive scanning interferes with those things. (So does denying people access to destructive scanning.)
It depends on what ‘much of’ means. If my body continues to live, but my memories and patterns of interaction cease to exist, I have ceased to exist and I’ve left a living body behind. Partial destruction of those memories and patterns is trickier, though; at some point I cease to exist, but it’s hard to say where that point is.
I am content to say I’m the same person now that I was six months ago, so if I am replaced by a backed-up copy of myself from six months ago, I’m content to say that the same person continues to exist (though I have lost potentially valuable experience). That said, I don’t think there’s any real fact of the matter here; it’s not wrong to say that I’m a different person than I was six months ago and that replacing me with my six-month-old memories involves destroying a person.
If I am replaced by a different person’s memories and patterns of interaction, I cease to exist.
Several trillion: each cell in my current body died. I continue to exist. If my clone ever existed, then it has ceased to exist.
Incidentally, I think you’re being a lot more adversarial here than this discussion actually calls for.
Very Good response. I can’t think of anything to disagree with and I don’t think I have anything more to add to the discussion.
My apologies if you read anything adversarial into my message. My intention was to be pointed in my line of questioning but you responded admirably without evading any questions.
Thanks for the discussion.
What if you were in a situation where you had a near 100% chance of a seemingly successful destructive upload on the one hand, and a 5% chance of survival without upload on the other? Which would you pick, and how does your answer generalize as the 5% goes up or down?
Of course I would do it because it would be better than nothing. My memories would survive. But I would still be dead.
Here’s a thought experiment for you to outline the difference (whether or not you think it makes sense from your position of valuing only the information): let’s say you could slowly transfer a person into an upload by the following method. You cut out a part of the brain. That part of the brain is now dead. You replace it with a new part, a silicon part (or some computational substrate) that can interface directly with the remaining neurons.
Am I dead? Yes but not all of me is and we’re now left with a hybrid being. It’s not completely me, but I’ve not yet been killed by the process and I get to continue to live and think thoughts (even though part of my thoughts are now happening inside something that isn’t me).
Gradually over a process of time (let’s say years rather than days or minutes or seconds) all of the parts of the brain are replaced.
At the end of it I’m still dead, but my memories live on. I did not survive but some part of the hybrid entity I became is alive and I got the chance to be part of that.
Now I know the position you’d take is that speeding that process up is mathematically equivalent.
It isn’t from my perspective. I’m dead instantly and I don’t get the chance to transition my existence in a meaningful way to me.
Sidetracking a little: I suspect you were comparing your unknown quantity X to some kind of “soul”. I don’t believe in souls. I value being alive, having experiences, and being able to think. To me, dying and then being resurrected on the last day by some superbeing who has rebuilt my atoms using other atoms and then copied my information content into some kind of magical “spirit being” is exactly identical to deconstructing me (killing me) and making a copy, even if I took the position that the reconstructed being on “the last day” was me. Which I don’t. As soon as I die, that’s me gone, regardless of whether some superbeing reconstructs me later using the same or different atoms (if that were possible).
You’re basically asking why I should value myself over a spatially separate exact copy of myself (and by exact copy we mean as close as you can get), and then superimposing another question: “isn’t it the information that’s important?”
Not exactly.
I’m concerned that I will die, and I’m examining the hypotheses as to why it’s not me that dies. The best response I can come up with is “you will die, but it doesn’t matter, because there’s another identical (or as close as possible) copy still around.”
As to what you value that I don’t, I don’t have an answer. Perhaps a way to elicit the answer would be to ask you why you only value the information, and not the physical object as well?
I’m not asking why you should value yourself over an exact copy, I’m asking why you do. I’m asking you (over and over) what you value. Which is a different question from why you value whatever that is.
I’ve told you what I value, in this context. I don’t know why I value it, particularly… I could tell various narratives, but I’m not sure I endorse any of them.
Is that a typo? What I’ve been trying to elicit is what xxd values here that TheOtherDave doesn’t, not the other way around. But evidently I’ve failed at that… ah well.
Thanks Dave. This has been a very interesting discussion and although I think we can’t close the gap on our positions I’ve really enjoyed it.
To answer your question “what do I value?”: I think I answered it already. I value not being killed.
The difference in our positions appears to be some version “but your information is still around” and my response is “but it’s not me” and your response is “how is it not you?”
I don’t know.
“What is it I value that you don’t?” I don’t know. Maybe I consider myself to be a higher-resolution copy, or a less lossy copy, or something. I can’t put my finger on it, because when it comes down to it, why do mere quantum states make a difference to me when all the macro-level information is the same apart from position and perhaps momentum? I don’t really have an answer for that.
But you want the things you think are people to really be people, right?
I’m not sure I care. For example if I had my evil way and I went FOOM then part of my optimization process would involve mind control and somewhat deviant roleplay with certain porno actresses. Would I want those actresses to be controlled against their will? Probably not. But at the same time it would be good enough if they were able to simulate being the actresses in a way that I could not tell the difference between the original and the simulated.
Others may have different opinions.
You wouldn’t prefer to forego the deviant roleplay for the sake of, y’know, not being evil?
But that’s not the point, I suppose. It sounds like you’d take the Experience Machine offer. I don’t really know what to say to that, except that it seems like a wacky utility function.
How is the deviant roleplay evil if the participants are either not being coerced or are catgirls? And if it’s not evil, then how would I be defined as evil just because I (sometimes—not always) like deviant roleplay?
That’s the crux of my point. I don’t reckon that friendly AI is simply the opposite of unfriendly AI—that is, an AI that optimizes humanity’s utility function (or any individual’s, for that matter)—and I furthermore reckon that pursuing that goal is much, much harder than trying to create an AI that at a minimum won’t kill us all AND might trade with us if it wants to.
Oh, sorry, I interpreted the comment incorrectly—for some reason I assumed your plan was to replace the actual porn actresses with compliant simulations. I wasn’t saying the deviancy itself was evil. Remember that the AI doesn’t need to negotiate with you—it’s superintelligent and you’re not. And while creating an AI that just ignores us but still optimises other things, well, it’s possible, but I don’t think it would be easier than creating FAI, and it would be pretty pointless—we want the AI to do something, after all.
A-Ha!
Therein lies the crux: you want the AI to do stuff for you.
EDIT: Oh yeah I get you. So it’s by definition evil if I coerce the catgirls by mind control. I suppose logically I can’t have my cake and eat it since I wouldn’t want my own non-sentient simulation controlled by an evil AI either.
So I guess that makes me evil. Who would have thunk it. Well, I guess strike my utility function off the list of friendly AIs. But then again, I’ve already said elsewhere that I wouldn’t trust my own function to be the optimal one.
I doubt, however, that we’d easily find a candidate function from a single individual for similar reasons.
I think we’ve slightly misunderstood each other. I originally thought you were saying that you wanted to destructively upload porn actresses and then remove their sentience so they did as they were told—which is obviously evil. But I now realise you only want to make catgirl copies of porn actresses while leaving the originals intact (?), the moral character of which depends on things like whether you get the consent of the actresses involved.
But yes! Of course I want the AGI to do something. If it doesn’t do anything, it’s not an AI. It’s not possible to write code that does absolutely nothing. And while building AGI might be a fun albeit stupidly dangerous project to pursue just for the heck of it, the main motivator behind wanting the thing to be created (speaking for myself) is so that it can solve problems, like, say, death and scarcity.
Technically, it’s still an AI, it’s just a really useless one.
Exactly.
So “friendly” is therefore a conjunction—NOT(unfriendly) AND useful—rather than just simply NOT(unfriendly), which is easier.
Off. Do I win?
You’re determined to make me say LOL so you can downvote me right?
EDIT: Yes you win. OFF.
Correct. I (unlike some others) don’t hold the position that a destructive upload followed by a simulated being is exactly the same being; therefore destructively scanning the porn actresses would, in my mind, be killing them. Non-destructively scanning them and then using the simulated versions for “evil purposes”, however, is not killing the originals. Whether using the copies for evil purposes, even against their simulated will, is actually evil or not is debatable. I know some will take the position that the simulations could theoretically be sentient; if they are sentient, then I am therefore de facto evil.
And I get the point that we want to get the AGI to do something. It’s just that I think it will be incredibly difficult to get it to do something if it’s recursively self-improving, and that it becomes progressively more difficult the further you move from defining friendly as NOT(unfriendly).
Why is it recursively self-improving if it isn’t doing anything? If my end goal was not to do anything, I certainly don’t need to modify myself in order to achieve that better than I could achieve it now.
Isn’t doing anything for us…
Well, I would argue that if the computer is running a perfect simulation of a person, then the simulation is sentient—it’s simulating the brain and is therefore simulating consciousness, and for the life of me I can’t imagine any way in which “simulated consciousness” differs from just “consciousness”.
I disagree. Creating a not-friendly-but-harmless AGI shouldn’t be any easier than creating a full-blown FAI. You’ve already had to do all the hard work of keeping it consistent while self-improving, and you’ve also had to do the hard work of programming the AI to recognise humans and to do them no harm, while still acting on other things in the world. Here’s Eliezer’s paper.
OK give me time to digest the jargon.
Newsflash: the human body is a machine too! I’m being deliberately antagonistic here, but it’s so obvious that a human (body and mind being the same thing) is a machine that it’s irrelevant to even mention it.
Song
lyrics
story
article—really much more a discussion than a lesson.