Just because you can experience something someone else can does not mean that you are of the same type. Belonging to a class of objects (e.g., humans) requires you to be one. A simulation of a piece of wood (visual texture, graphics, molecular structure, etc.) is not a piece of wood and so does not belong to the class of pieces of wood. A simulated piece of wood can experience a simulated burning process or any other wood-suitable experience, but it is still not a piece of wood. Likewise, a piece of software is by definition not a human being; it is at best a simulation of one.
So when you say “most typically human feelings (hungry, thirsty, tired, etc.) will not be preserved creating a new type of an agent” you’re making a definitional claim that whatever the new agent experiences, it won’t be a human feeling, because (being software) the agent definitionally won’t be a human. So on your view it might experience hunger, thirst, fatigue, etc., or it might not, but if it does they won’t be human hunger, thirst, fatigue, etc., merely simulated hunger, thirst, fatigue, etc.
Yes? Do I understand you now?
FWIW, I agree that there are definitions of “human being” and “software” by which a piece of software is definitionally not a human being, though I don’t think those are useful definitions to be using when thinking about the behavior of software emulations of human beings. But I’m willing to use your definitions when talking to you.
You go on to say that this agent, not being human, will not want the same things as a human. Well, OK; that follows from your definitions.
One obvious followup question is: would a reliable software simulation of a human, equipped with reliable software simulations of the attributes and experiences that define humanity (whatever those turn out to be; I labelled them X2 above), generate reliable software simulations of wanting what a human wants?
Relatedly, do we care? That is, given a choice between an upload U1 that reliably simulates wanting what a human wants, and an upload U2 that doesn’t reliably simulate wanting what a human wants, do we have any grounds for preferring to create U1 over U2?
Because if it’s important to us that uploads reliably simulate being human, then we should design our uploads so that they have reliable simulations of X2. Right?
So uploads are typically not mortal, hungry for food, etc. You are asking whether, if we create simulations of humans so exact that they have all the typical limitations, they would have the same wants as real humans; probably yes. The original question Wei Dai was asking me was about my statement that if we become uploads, “At that point you already lost humanity by definition”. Allow me to propose a simple thought experiment. We make simulated versions of all humans and put them in cyberspace. At that point we proceed to kill all people. Does the fact that somewhere in cyberspace there is still a piece of source code which wants the same things as I do make a difference in this scenario? I still feel like humanity gets destroyed in this scenario, but you are free to disagree with my interpretation.
You are asking whether, if we create simulations of humans so exact that they have all the typical limitations, they would have the same wants as real humans; probably yes.
I’m also asking, should we care? More generally, I’m asking what is it about real humans we should prefer to preserve, given the choice? What should we be willing to discard, given a reason?
The original question Wei Dai was asking me was about my statement that if we become uploads, “At that point you already lost humanity by definition”.
Fair enough. I’ve already agreed that this is true for the definitions you’ve chosen, so if that’s really all you’re talking about, then I guess there’s nothing more to say. As I said before, I don’t think those are useful definitions, and I don’t use them myself.
Does the fact that somewhere in cyberspace there is still a piece of source code which wants the same things as I do make a difference in this scenario?
Source code? Maybe not; it depends on whether that code is ever compiled. Object code? Yes, it makes a huge difference.
I still feel like humanity gets destroyed in this scenario, but you are free to disagree with my interpretation.
Some things get destroyed. Other things survive. Ultimately, the question in this scenario is how much do I value what we’ve lost, and how much do I value what we’ve gained? My answer depends on the specifics of the simulation, and is based on what I value about humanity.
The thing is, I could ask precisely the same question about aging from 18 to 80. Some things are lost, other things are not. Does my 18-year-old self get destroyed in the process, or does it just transform into an 80-year-old? My answer depends on the specifics of the aging, and is based on what I value about my 18-year-old self.
We face these questions every day; they aren’t some weird science-fiction consideration. And for the most part, we accept that as long as certain key attributes are preserved, we continue to exist.
Some things get destroyed. Other things survive. Ultimately, the question in this scenario is how much do I value what we’ve lost, and how much do I value what we’ve gained?
I agree with your overall assessment. However, to me, if any part of humanity is lost, it is already an unacceptable loss.
OK. Thanks for clarifying your position.
At the very least, by this point we’ve killed a lot of people. The fact that they’ve been backed up doesn’t make the murder less heinous.
Whether or not ‘humanity’ gets destroyed in this scenario depends on the definition that you apply to the word ‘humanity’. If you mean the flesh and blood, the meat and bone, then yes, it gets destroyed. If you mean values and opinions, thoughts and dreams, then some of them are destroyed but not all of them; the cyberspace backups still have those things (presuming that they’re actually working cyberspace backups).
Well, if nothing else happens, our new computer substrate will stop working. But if we remove that problem, in what sense has this not already happened?
If you like, we can assume that Eliezer is wrong about that. In which case, I’ll have to ask what you think is actually true, whether a smarter version of Aristotle could tell the difference by sitting in a dark room thinking about consciousness, and whether or not we should expect this to matter.
We make simulated versions of all humans and put them in cyberspace. At that point we proceed to kill all people.
Ah, The Change in the Prime Intellect scenario. Is it possible to reconstruct meat humans if the uploads decide to do so? If not, then something has been irrecoverably lost.