There are a few interesting possibilities here:
1) The AI and I agree on what constitutes a person. In that case, the AI doesn’t destroy anything I consider a person.
2) The AI considers X a person, and I don’t. In that case, I’m OK with deleting X, but the AI isn’t.
3) I consider X a person, and the AI doesn’t. In that case, the AI is OK with deleting X, but I’m not.
You’re concerned about scenario #3, but not scenario #2. Yes?
But in scenario #2, if the AI had control, a person’s existence would be preserved, which is the goal you seem to want to achieve.
This only makes sense to me if we assume that I am always better at detecting people than the AI is.
But why would we assume that? It seems implausible to me.
Ha Ha. You’re right. Thanks for reflecting that back to me.
Yes, if you break apart my argument, I'm saying exactly that, though I hadn't broken it down to that extent before.
The last part I disagree with: that I assume I'm always better at detecting people than the AI is. Clearly I'm not, but in my own personal case I don't trust the AI if it disagrees with me, as a matter of simple risk management. If it's wrong, and it kills me and then resurrects a copy, then I have experienced total loss. If it's right, then I'm still alive.
But I don't know the answer. So if I were designing the AI, I would have to say that only scenario #1 could be allowed, because though I could be wrong, I'd prefer not to take the risk of personal destruction.
That said, if someone chose to destructively scan themselves to upload, that would be their personal choice.
Well, I certainly agree that, all else being equal, we ought not kill X if there's a doubt about whether X is a person or not, and I support building AIs in such a way that they also agree with that.
But if for whatever reason I’m in a scenario where only one of X and Y can survive, and I believe X is a person and Y is not, and the AI says that Y is a person and X is not, and I’m the one who has to decide which of X and Y to destroy, then I need to decide whether I trust my own judgment more than the AI’s judgment, or less.
And obviously that’s going to depend on the particulars of X, Y, me, and the AI… but it’s certainly possible that I might in that situation update my beliefs and destroy X instead of Y.
I think we’re on the same page from a logical perspective.
My guess is that the difference comes down to a physical-science perspective versus a compsci perspective.
A compsci perspective would tend to view the two individuals as two instances of the class of individual X: the two class instances are logically equivalent except for position.
The physical-science perspective is that there are two bunches of matter near each other, with the only thing differing being the position. It's basically the same scenario as two electrons with the same spin state, momentum, energy, etc., but different positions: there's no way to distinguish the two of them by their physical properties, but there are two of them, not one.
Regardless, if you believe they are the same person then you go first through the teleportation device… ;->
In Identity Isn’t In Specific Atoms, Eliezer argued that even from what you called the “physical science perspective,” the two electrons are ontologically the same entity. What do you make of his argument?
What do I make of his argument? Well, I don't have a PhD in physics, though I do have a bachelor's in physics/math, so my position would be the following:
Quantum physics doesn't scale up to the macro level. While swapping the two helium atoms between two billiard balls results in you not being able to tell which helium atom was which, the two billiard balls themselves can certainly be distinguished from each other. Even "teleporting" one from one place to another will not result in an identical copy, since the quantum states will all have changed just by dint of having been read by the scanning device. Quantum state changes each time you measure it, so the reason you cannot distinguish two identical copies from each other is not that they are identical; it's that you cannot even distinguish the original from itself, because the states change each time you measure them.
You could not distinguish the atoms of a macro-scale object composed of atoms of types A, B, and C from those of another macro-scale object composed of atoms of types A, B, and C in exactly the same configuration.
That said, we're talking about a single object here. As soon as you compare more than one object it's not the same: there are the position, momentum, et cetera of the macro-scale objects to distinguish them, even though they are the same type of object.
I strongly believe that the disagreement around this topic comes from looking at things as classes from a comp sci perspective.
From a physics perspective it makes sense to say two objects of the same type are distinct even though their properties are identical apart from minor differences such as position and momentum.
From a compsci perspective, talking about the position and momentum of instances of classes doesn't make any sense. The two instances of the class ARE the same because they are logically the same.
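A minimal sketch of how those two perspectives come apart, in Python (the Individual class and its fields are hypothetical, purely for illustration): value equality tracks the compsci notion of "logically the same," while object identity tracks the physics notion of "two of them, not one."

```python
from dataclasses import dataclass, field

@dataclass
class Individual:
    """Hypothetical stand-in for 'the class of individual X'."""
    memories: tuple                          # the information content
    position: tuple = field(compare=False)   # excluded from ==, like position above

a = Individual(memories=("first day of school", "physics degree"), position=(0, 0))
b = Individual(memories=("first day of school", "physics degree"), position=(5, 0))

print(a == b)   # True:  value equality -- "logically the same" (compsci view)
print(a is b)   # False: object identity -- two distinct instances (physics view)
```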
Anyways, I've segued here: take the two putative electrons in a previous post above. There is no way to distinguish between the two of them except by position, but they ARE two separate electrons; they're not a single electron. If one of them is part of, e.g., my brain and is then swapped out for the other, there's no longer any way to tell which is which. It's impossible. And my guess is that this is what's causing the confusion. From a point of view of usefulness, neither of the two objects is different from the other. But they are separate from each other, and destroying one doesn't mean that there are still two of them; there is now only one, and one has been destroyed.
Dave seems to take the position that that is fine: the position and number of copies are irrelevant to him, because it's the information content that's important.
For me, sure, if my information content lived on, that would be better than nothing, but it wouldn't be me.
I wouldn’t take a destructive upload if I didn’t know that I would survive it (in the senses I care about), in roughly the same sense that I wouldn’t cross the street if I didn’t know I wasn’t going to be killed by a passing car. In both cases, I require reasonable assurance. In neither case does it have to be absolute.
Exactly. Reasonable assurance is good enough; absolute isn't necessary. But I'm not willing to be destructively scanned even if a copy of me thinks it's me, looks like me, and acts like me.
That said, I'm willing to accept the stance that others take: they are reasonably convinced that destructive scanning just means they will appear somewhere else a fraction of a second later (or however long it takes). Just don't ask me to do it. And expect a bullet if you try to force me!
Well, sure. But if we create an economy around you where people who insist on carrying a sack of atoms around with them wherever they go are increasingly a minority… for example, if we stop maintaining roads for you to drive a car on, stop flying airplanes to carry your atoms from place to place, etc. … what then?
This is a different point entirely. Sure, it's more efficient to just work with instances of similar objects, and I've already said elsewhere that I'm OK with that if we're talking about objects.
And if everyone else is OK with being destructively scanned then I guess I’ll have to eke out an existence as a savage. The economy can have my atoms after I’m dead.
Sorry I wasn’t clear—the sack of atoms I had in mind was the one comprising your body, not other objects.
Also, my point is that it’s not just a case of live and let live. Presumably, if the rest of us giving up the habit of carrying our bodies wherever we go means you are reduced to eking out your existence as a savage, then you will be prepared to devote quite a lot of resources to preventing us from giving up that habit… yes?
Yes that’s right.
I will not consent to being involuntarily destructively scanned, and yes, I will devote all of my resources to preventing myself from being involuntarily destructively scanned.
That said, if you or anyone else wants to do it to themselves voluntarily it’s none of my business.
If what you're really asking, however, is whether I will attempt to intervene if I notice a group of individuals or an organization forcing destructive scanning on individuals, I suspect that I might, but we're not there yet.
I understand that you won’t consent to being destructively scanned, and that you might intervene to prevent others from being destructively scanned without their consent. That isn’t what I asked.
I encourage you to re-read my question. If, after doing so, you still think your reply answers it, then I think we do best to leave it at that.
I thought I had answered, but perhaps I answered what I read into it.
If you are asking "will I prevent you from gradually moving everything to digital, perhaps including yourselves?" then the answer is no.
I just wanted to clarify that we were talking about with consent vs without consent.
I agree completely that there are two bunches of matter in this scenario. There are also (from what you’re labeling the compsci perspective) two data structures. This is true.
My question is, why should I care? What value does the one on the left have, that the one on the right doesn’t have, such that having them both is more valuable than having just one of them? Why is destroying one of them a bad thing? What you seem to be saying is that they are valuable because they are different people… but what makes that a source of value?
For example: to my way of thinking, what’s valuable about a person is the data associated with them, and the patterns of interaction between that data and its surroundings. Therefore, I conclude that if I have that data and those interactions then I have preserved what’s valuable about the person. There are other things associated with them—for example, a particular set of atoms—but from my perspective that’s pretty valueless. If I lose the atoms while preserving the data, I don’t care. I can always find more atoms; I can always construct a new body. But if I lose the data, that’s the ball game—I can’t reconstruct it.
In the same sense, what I care about in a book is the data, not the individual pieces of paper. If I shred the paper while digitizing the book, I don't care… I've kept what's valuable. If I keep the paper while allowing the patterns of ink on the pages to be randomized, I do care… I've lost what's valuable.
So when I look at a system to determine how many people are present in that system, what I’m counting is unique patterns of data, not pounds of biomass, or digestive systems, or bodies. All of those things are certainly present, but they aren’t what’s valuable to me. And if the system comprises two bodies, or five, or fifty, or a million, and they all embody precisely the same data, then I can preserve what’s valuable about them with one copy of that data… I don’t need to lug a million bundles of atoms around.
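A minimal sketch of that counting rule, assuming each body's information content could be serialized (the bodies, fields, and choice of hash here are hypothetical): persons are tallied as unique data patterns, not as bodies.

```python
import hashlib
import json

# Hypothetical stand-ins for the information content embodied by each body.
bodies = [
    {"memories": ["first kiss", "college"], "values": ["honesty"]},
    {"memories": ["first kiss", "college"], "values": ["honesty"]},  # exact copy
    {"memories": ["moon landing"], "values": ["curiosity"]},
]

def data_pattern(body: dict) -> str:
    """Content hash: bodies embodying precisely the same data share one pattern."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

print("bodies present: ", len(bodies))                             # 3
print("persons counted:", len({data_pattern(b) for b in bodies}))  # 2
```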
So, as I say, that’s me… that’s what I value, and consequently what I think is important to preserve. You think it’s important to preserve the individual bundles, so I assume you value something different.
What do you value?
More particularly, you regularly change out your atoms.
That turns out to be true, but I suspect everything I say above would be just as true if I kept the same set of atoms in perpetuity.
I agree that it would still be true, but our existence would be less strong an example of it.
I understand that you value the information content and I’m OK with your position.
Let's do another thought experiment, then. Say we're some unknown X number of years in the future, and some foreign entity/government/whatever decided it wanted the territory of the United States (it could be any country; I'm just using the USA as an example) but didn't want the people. It did, however, value the ideas, opinions, memories, etc. of the American people. If said entity then destructively scanned the landmass but painstakingly copied all of the ideas, opinions, memories, etc. into some kind of data store which it could access at its leisure later, would that be the same thing as the original living people?
I'd argue that from a compsci perspective what you have just done is build a static class which describes the people, their ideas, memories, etc., but this is not the original people; it's just a model of them.
Now don’t get me wrong, a model like that would be very valuable, it just wouldn’t be the original.
And yes, of course some people value originals; otherwise you wouldn't have to pay millions of dollars for postage stamps printed in the 1800s, even though I'd guess that scanning such a stamp and printing out a copy of it should, to all intents and purposes, be the same.
In the thought experiment you describe, they’ve preserved the data and not the patterns of interaction (that is, they’ve replaced a dynamic system with a static snapshot of that system), and something of value is therefore missing, although they have preserved the ability to restore the missing component at their will.
If they execute the model and allow the resulting patterns of interaction to evolve in an artificial environment they control, then yes, that would be just as valuable to me as taking the original living people and putting them into an artificial environment they control.
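A minimal sketch of the snapshot-versus-dynamic-system distinction being drawn here, in Python (the Person class and its events are hypothetical): the static model preserves the data, and executing it restores the patterns of interaction.

```python
import copy

class Person:
    """Hypothetical: a dynamic system whose state evolves through interaction."""
    def __init__(self, memories):
        self.memories = list(memories)

    def experience(self, event):
        # Interaction with the surroundings changes the state.
        self.memories.append(event)

alice = Person(["childhood", "first job"])
snapshot = copy.deepcopy(alice.memories)    # the static model: data, frozen

alice.experience("upload debate")           # the original keeps interacting...
print(alice.memories == snapshot)           # False: the snapshot has fallen behind

# Executing the model restores the missing component: a restored Person
# subjected to the same events evolves just as the original would.
restored = Person(snapshot)
restored.experience("upload debate")
print(restored.memories == alice.memories)  # True
```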
I understand that there’s something else in the original that you value, which I don’t… or at least, which I haven’t thought about. I’m trying to understand what it is. Is it the atoms? Is it the uninterrupted continuous existence (e.g., if you were displaced forward in time by two seconds, such that for a two-second period you didn’t exist, would that be better or worse or the same as destroying you and creating an identical copy two seconds later?) Is it something else?
Similarly, if you valued a postage stamp printed in the 1800s more than the result of destructively scanning such a stamp and creating an atom-by-atom replica of it, I would want to understand what about the original stamp you valued, such that the value was lost in that process.
Thus far, the only answer I can infer from your responses is that you value being the original… or perhaps BEING the original, if that's different… and the value of that doesn't derive from anything; it's just a primitive. Is that it?
If so, a thought experiment for you in return: if I convince you that last night I scanned xxd and created an identical duplicate, and that you are that duplicate, do you consequently become convinced that your existence is less valuable than you’d previously thought?
I guess from your perspective you could say that the value of being the original doesn't derive from anything and is just a primitive, because the macro information is the same except for position (though the quantum states are all different even at the point of copying). But yes, I value the original more than the copy, because I consider the original to be me and the others to be just copies, even if they would legally, and in fact, be sentient beings in their own right.
Yes, this is something I have already modeled/daydreamed about: if I woke up tomorrow and you could convince me I was just a copy, I'd be disappointed that I wasn't the original but glad that I had existence.
OK.
Hmm