We can talk about what a high-fidelity emulation includes. Will it be just your mind? Or will it be mind + body + environment? In the most common case (with an absent body), the most typically human feelings (hungry, thirsty, tired, etc.) will not be preserved, creating a new type of agent. People are largely defined by their physiological needs (think of Maslow’s pyramid). An entity with no such needs (or with such needs satisfied by virtual/simulated abundant resources) will not be human and will not want the same things as a human. Someone who is no longer subject to human weaknesses or relatively limited intelligence may lose all allegiance to humanity, since they would no longer be a part of it. So I guess I define “humanity” as comprised of standard/unaltered humans. Anything superior is no longer human to me, just as we are not first and foremost Neanderthals and only secondarily Homo sapiens.
Insofar as Maslow’s pyramid accurately models human psychology (a point about which I have my doubts), I don’t think the majority of people you’re likely to be speaking to on the Internet are defined in terms of their low-level physiological needs. Food, shelter, physical security—you might have fears of being deprived of these, or might even have experienced temporary deprivation of one or more (say, if you’ve experienced domestic violence, or fought in a war), but in the long run they’re not likely to dominate your goals in the way they might for, say, a Clovis-era Alaskan hunter. We treat cases where they do as abnormal, and put a lot of money into therapy for them.
If we treat a modern, first-world, middle-class college student with no history of domestic or environmental violence as psychologically human, then, I don’t see any reason why we shouldn’t extend the same courtesy to an otherwise humanlike emulation whose simulated physiological needs are satisfied as a function of the emulation process.
I don’t know about you, but for me only a few hours a day are devoted to thinking or other non-physiological pursuits; the rest goes to sleeping, eating, drinking, sex, physical exercise, etc. My goals are dominated by the need to acquire resources to support the physiological needs of me and my family. You can extend any courtesy you want to anyone you want, but you (a human body) and a computer program (software) don’t have much in common as far as belonging to the same group is concerned. Software is not humanity; at best it is a partial simulation of one aspect of one person.
It seems to me that there are a couple of things going on here. I spend a reasonable amount of time (probably a couple of hours of conscious effort each day; I’m not sure how significant I want to call sleep) meeting immediate physical needs, but those don’t factor much into my self-image or my long-term goals; I might spend an hour each day making and eating meals, but doing so is neither a matter of long-term planning nor a cherished marker of personhood for me. Looked at another way, there are people who can’t eat or excrete normally because of one medical condition or another, but I don’t see them as proportionally less human.
I do spend a lot of time gaining access to abstract resources that ultimately secure my physiological satisfaction, on the other hand, and that is tied closely into my self-image, but it’s so far removed from its ultimate goal that I don’t feel that cutting out, say, apartment rental and replacing it with a proportional bill for Amazon AWS cycles would have much effect on my thoughts or actions further up the chain, assuming my mental and emotional machinery remains otherwise constant. I simply don’t think about the low-level logistics that much; it’s not my job. And I’m a financially independent adult; I’d expect the college student in the grandparent to be thinking about them in the most abstract possible way, if at all.
Well, yes, a lot depends on what we assume the upload includes, and how important the missing stuff is. If Dave!upload doesn’t include X1, and X2 defines Dave!original’s humanity, and X1 contains X2, then Dave!upload isn’t human… more or less tautologically.
We can certainly argue about whether our experiences of hunger, thirst, fatigue, etc. qualify as X1, X2, or both… or, more generally, whether anything does. I’m not nearly as confident as you sound about either of those things.
But I’m not sure that matters.
Let’s posit for the sake of comity that there exists some set of experiences that qualify for X2. Maybe it’s hunger, thirst, fatigue, etc. as you suggest. Maybe it’s curiosity. Maybe it’s boredom. Maybe human value is complex and X2 actually includes a carefully balanced brew of a thousand different things, many of which we don’t have words for.
Whatever it is, if it’s important to us that uploads be human, then we should design our uploads so that they have X2. Right?
But you seem to be taking it for granted that whatever X2 turns out to be, uploads won’t experience X2. Why?
Just because you can experience something someone else can does not mean that you are of the same type. Belonging to a class of objects (e.g., humans) requires you to be one. A simulation of a piece of wood (visual texture, graphics, molecular structure, etc.) is not a piece of wood and so does not belong to the class of pieces of wood. A simulated piece of wood can undergo a simulated burning process or any other wood-suitable experience, but it is still not a piece of wood. Likewise, a piece of software is by definition not a human being; it is at best a simulation of one.
Ah. So when you say “the most typically human feelings (hungry, thirsty, tired, etc.) will not be preserved, creating a new type of agent,” you’re making a definitional claim that whatever the new agent experiences, it won’t be a human feeling, because (being software) the agent definitionally won’t be a human. So on your view it might experience hunger, thirst, fatigue, etc., or it might not, but if it does, they won’t be human hunger, thirst, fatigue, etc., merely simulated hunger, thirst, fatigue, etc.
Yes? Do I understand you now?
FWIW, I agree that there are definitions of “human being” and “software” by which a piece of software is definitionally not a human being, though I don’t think those are useful definitions to be using when thinking about the behavior of software emulations of human beings. But I’m willing to use your definitions when talking to you.
You go on to say that this agent, not being human, will not want the same things as a human. Well, OK; that follows from your definitions.
One obvious followup question is: would a reliable software simulation of a human, equipped with reliable software simulations of the attributes and experiences that define humanity (whatever those turn out to be; I labelled them X2 above), generate reliable software simulations of wanting what a human wants?
Relatedly, do we care? That is, given a choice between an upload U1 that reliably simulates wanting what a human wants, and an upload U2 that doesn’t reliably simulate wanting what a human wants, do we have any grounds for preferring to create U1 over U2?
Because if it’s important to us that uploads reliably simulate being human, then we should design our uploads so that they have reliable simulations of X2. Right?
So uploads are typically not mortal, hungry for food, etc. You are asking whether, if we create simulations of humans so exact that they have all the typical limitations, they would have the same wants as real humans; probably yes. The original question Wei Dai was asking me was about my statement that if we become uploads, “At that point you already lost humanity by definition.” Allow me to propose a simple thought experiment. We make simulated versions of all humans and put them in cyberspace. At that point we proceed to kill all people. Does the fact that somewhere in cyberspace there is still a piece of source code which wants the same things as I do make a difference in this scenario? I still feel like humanity gets destroyed in this scenario, but you are free to disagree with my interpretation.
You are asking whether, if we create simulations of humans so exact that they have all the typical limitations, they would have the same wants as real humans; probably yes.
I’m also asking: should we care? More generally, I’m asking what it is about real humans we should prefer to preserve, given the choice. What should we be willing to discard, given a reason?
The original question Wei Dai was asking me was about my statement that if we become uploads, “At that point you already lost humanity by definition.”
Fair enough. I’ve already agreed that this is true for the definitions you’ve chosen, so if that’s really all you’re talking about, then I guess there’s nothing more to say. As I said before, I don’t think those are useful definitions, and I don’t use them myself.
Does the fact that somewhere in cyberspace there is still a piece of source code which wants the same things as I do make a difference in this scenario?
Source code? Maybe not; it depends on whether that code is ever compiled. Object code? Yes, it makes a huge difference.
I still feel like humanity gets destroyed in this scenario, but you are free to disagree with my interpretation.
Some things get destroyed. Other things survive. Ultimately, the question in this scenario is how much do I value what we’ve lost, and how much do I value what we’ve gained? My answer depends on the specifics of the simulation, and is based on what I value about humanity.
The thing is, I could ask precisely the same question about aging from 18 to 80. Some things are lost, other things are not. Does my 18-year-old self get destroyed in the process, or does it just transform into an 80-year-old? My answer depends on the specifics of the aging, and is based on what I value about my 18-year-old self.
We face these questions every day; they aren’t some weird science-fiction consideration. And for the most part, we accept that as long as certain key attributes are preserved, we continue to exist.
Some things get destroyed. Other things survive. Ultimately, the question in this scenario is how much do I value what we’ve lost, and how much do I value what we’ve gained?
I agree with your overall assessment. However, to me if any part of humanity is lost, it is already an unacceptable loss.
OK. Thanks for clarifying your position.
At the very least, by this point we’ve killed a lot of people; the fact that they’ve been backed up doesn’t make the murder less heinous.
Whether or not ‘humanity’ gets destroyed in this scenario depends on the definition that you apply to the word ‘humanity’. If you mean the flesh and blood, the meat and bone, then yes, it gets destroyed. If you mean values and opinions, thoughts and dreams, then some of them are destroyed but not all of them—the cyberspace backups still have those things (presuming that they’re actually working cyberspace backups).
Well, if nothing else happens, our new computer substrate will stop working. But if we remove that problem—in what sense has this not already happened?
If you like, we can assume that Eliezer is wrong about that. In which case, I’ll have to ask what you think is actually true, whether a smarter version of Aristotle could tell the difference by sitting in a dark room thinking about consciousness, and whether or not we should expect this to matter.
We make simulated versions of all humans and put them in cyberspace. At that point we proceed to kill all people.
Ah, The Change in the Prime Intellect scenario. Is it possible to reconstruct meat humans if the uploads decide to do so? If not, then something has been irrecoverably lost.