I think if I became an upload (assuming it’s a high fidelity emulation) I’d still want roughly the same things that I want now. Someone who is currently altruistic towards humanity should probably still be altruistic towards humanity after becoming an upload. I don’t understand why you say “At that point you already lost humanity by definition”.
Wei, the question here is "would" rather than "should", no? It's quite possible that the altruism I endorse as a part of me is tied to my brain's empathy module, much of which might break if I see I cannot relate to other humans. There are of course good fictional examples of this, e.g. Ted Chiang's "Understand" (http://www.infinityplus.co.uk/stories/under.htm) and, ahem, Watchmen's Dr. Manhattan.
Logical fallacy: Generalization from fictional evidence.
A high-fidelity upload who was previously altruistic toward humanity would still be altruistic during the first minute after awakening; their environment would not cause this to change unless the same sensory experiences would have caused their previous self to change.
If you start doing code modification, of course, some but not all bets are off.
Well, I did put a disclaimer by using the standard terminology :) Fiction is good for suggesting possibilities; of course, you cannot derive evidence from it.
I agree on the first-minute point, but I do not see why it's relevant, because there is also the 999,999th minute, by which point value drift will have taken over (if altruism is strongly related to empathy). I guess upon waking up I'd make value preservation my first order of business, but since an upload is still evolution's spaghetti code, it might be a race against time.
Perhaps the idea is that the sensory experience of no longer falling into the category of “human” would cause the brain to behave in unexpected ways?
I don’t find that especially likely, mind, although I suppose long-term there might arise a self-serving “em supremacy” meme.
I don’t see why this is necessarily true, unless you treat “altruism toward humanity” as a terminal goal.
When I was a very young child, I greatly valued my brightly colored alphabet blocks; but today, I pretty much ignore them. My mind has developed to the point where I can fully visualize all the interesting permutations of the blocks in my head, should I need to do so for some reason.
Well, yes. I think that's the point. I certainly don't value other humans only for the way that they interest me; if that were so, I probably wouldn't care about most of them at all. Humanity is a terminal value to me, or, more generally, the existence and experiences of happy, engaged, thinking sentient beings. Humans qualify regardless of whether or not uploads exist (and uploads, of course, also qualify).
How do you know that “the existence and experiences of happy, engaged, thinking sentient beings” is indeed one of your terminal values, and not an instrumental value ?
+1 for linking to Understand ; I remembered reading the story long ago, but I forgot the link. Thanks for reminding me !
We can talk about what high fidelity emulation includes. Will it be just your mind? Or will it be Mind + Body + Environment? In the most common case (with an absent body) most typically human feelings (hungry, thirsty, tired, etc.) will not be preserved creating a new type of an agent. People are mostly defined by their physiological needs (think of Maslow's pyramid). An entity with no such needs (or with such needs satisfied by virtual/simulated abundant resources) will not be human and will not want the same things as a human. Someone who is no longer subject to human weaknesses or relatively limited intelligence may lose all allegiances to humanity, since they would no longer be a part of it. So I guess I define "humanity" as comprised of standard/unaltered humans. Anything superior is no longer a human to me, just as we do not consider ourselves first and foremost Neanderthals and only secondarily Homo sapiens.
Insofar as Maslow's pyramid accurately models human psychology (a point about which I have my doubts), I don't think the majority of people you're likely to be speaking to on the Internet are defined in terms of their low-level physiological needs. Food, shelter, physical security: you might have fears of being deprived of these, or even might have experienced temporary deprivation of one or more (say, if you've experienced domestic violence, or fought in a war), but in the long run they're not likely to dominate your goals in the way they might for, say, a Clovis-era Alaskan hunter. We treat cases where they do as abnormal, and put a lot of money into therapy for them.
If we treat a modern, first-world, middle-class college student with no history of domestic or environmental violence as psychologically human, then, I don’t see any reason why we shouldn’t extend the same courtesy to an otherwise humanlike emulation whose simulated physiological needs are satisfied as a function of the emulation process.
I don't know about you, but for me only a few hours a day are devoted to thinking or other non-physiological pursuits; the rest goes to sleeping, eating, drinking, sex, physical exercise, etc. My goals are dominated by the need to acquire resources to support the physiological needs of me and my family. You can extend any courtesy you want to anyone you want, but you (a human body) and a computer program (software) don't have much in common as far as belonging to the same group is concerned. Software is not humanity; at best it is a partial simulation of one aspect of one person.
It seems to me that there are a couple of things going on here. I spend a reasonable amount of time (probably a couple of hours of conscious effort each day; I'm not sure how significant I want to call sleep) meeting immediate physical needs, but those don't factor much into my self-image or my long-term goals; I might spend an hour each day making and eating meals, but doing so is neither a matter of long-term planning nor a cherished marker of personhood for me. Looked at another way, there are people who can't eat or excrete normally because of one medical condition or another, but I don't see them as proportionally less human.
I do spend a lot of time gaining access to abstract resources that ultimately secure my physiological satisfaction, on the other hand, and that is tied closely into my self-image, but it’s so far removed from its ultimate goal that I don’t feel that cutting out, say, apartment rental and replacing it with a proportional bill for Amazon AWS cycles would have much effect on my thoughts or actions further up the chain, assuming my mental and emotional machinery remains otherwise constant. I simply don’t think about the low-level logistics that much; it’s not my job. And I’m a financially independent adult; I’d expect the college student in the grandparent to be thinking about them in the most abstract possible way, if at all.
Well, yes, a lot depends on what we assume the upload includes, and how important the missing stuff is.
If Dave!upload doesn’t include X1, and X2 defines Dave!original’s humanity, and X1 contains X2, then Dave!upload isn’t human… more or less tautologically.
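Spelled out, reading "doesn't include X1" as "has none of the properties in X1" and "X2 defines humanity" as "being human requires every property in X2" (glosses of my own informal terms, nothing more), the tautology is roughly:

\[
\forall p \in X_1:\ \neg\mathrm{Has}(\mathrm{upload}, p), \qquad X_2 \subseteq X_1, \qquad \mathrm{Human}(x) \Rightarrow \forall p \in X_2:\ \mathrm{Has}(x, p)
\]
\[
\Rightarrow\ \forall p \in X_2:\ \neg\mathrm{Has}(\mathrm{upload}, p)\ \Rightarrow\ \neg\mathrm{Human}(\mathrm{upload}) \quad (\text{given } X_2 \neq \emptyset).
\]

The only extra assumption doing any work is that X2 is nonempty.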
We can certainly argue about whether our experiences of hunger, thirst, fatigue, etc. qualify as X1, X2, or both… or, more generally, whether anything does. I’m not nearly as confident as you sound about either of those things.
But I’m not sure that matters.
Let’s posit for the sake of comity that there exists some set of experiences that qualify for X2. Maybe it’s hunger, thirst, fatigue, etc. as you suggest. Maybe it’s curiosity. Maybe it’s boredom. Maybe human value is complex and X2 actually includes a carefully balanced brew of a thousand different things, many of which we don’t have words for.
Whatever it is, if it’s important to us that uploads be human, then we should design our uploads so that they have X2. Right?
But you seem to be taking it for granted that whatever X2 turns out to be, uploads won’t experience X2.
Why?
Just because you can experience something someone else can does not mean that you are of the same type. Belonging to a class of objects (e.g., humans) requires you to be one of them. A simulation of a piece of wood (visual texture, graphics, molecular structure, etc.) is not a piece of wood and so does not belong to the class of pieces of wood. A simulated piece of wood can undergo a simulated burning process or any other wood-suitable experience, but it is still not a piece of wood. Likewise, a piece of software is by definition not a human being; it is at best a simulation of one.
Ah.
So when you say “most typically human feelings (hungry, thirsty, tired, etc.) will not be preserved creating a new type of an agent” you’re making a definitional claim that whatever the new agent experiences, it won’t be a human feeling, because (being software) the agent definitionally won’t be a human. So on your view it might experience hunger, thirst, fatigue, etc., or it might not, but if it does they won’t be human hunger, thirst, fatigue, etc., merely simulated hunger, thirst, fatigue, etc.
Yes? Do I understand you now?
FWIW, I agree that there are definitions of “human being” and “software” by which a piece of software is definitionally not a human being, though I don’t think those are useful definitions to be using when thinking about the behavior of software emulations of human beings. But I’m willing to use your definitions when talking to you.
You go on to say that this agent, not being human, will not want the same things as a human.
Well, OK; that follows from your definitions.
One obvious followup question is: would a reliable software simulation of a human, equipped with reliable software simulations of the attributes and experiences that define humanity (whatever those turn out to be; I labelled them X2 above), generate reliable software simulations of wanting what a human wants?
Relatedly, do we care? That is, given a choice between an upload U1 that reliably simulates wanting what a human wants, and an upload U2 that doesn't reliably simulate wanting what a human wants, do we have any grounds for preferring to create U1 over U2?
Because if it’s important to us that uploads reliably simulate being human, then we should design our uploads so that they have reliable simulations of X2. Right?
So uploads are typically not mortal, hungry for food, etc. You are asking whether, if we create such exact simulations of humans that they have all the typical limitations, they would have the same wants as real humans; probably yes. The original question Wei Dai was asking me was about my statement that if we become uploads, "At that point you already lost humanity by definition." Allow me to propose a simple thought experiment. We make simulated versions of all humans and put them in cyberspace. At that point we proceed to kill all people. Does the fact that somewhere in cyberspace there is still a piece of source code which wants the same things as I do make a difference in this scenario? I still feel like humanity gets destroyed in this scenario, but you are free to disagree with my interpretation.
I'm also asking: should we care? More generally, what is it about real humans that we should prefer to preserve, given the choice? What should we be willing to discard, given a reason?
Fair enough. I've already agreed that "we lose humanity by definition" is true for the definitions you've chosen, so if that's really all you're talking about, then I guess there's nothing more to say. As I said before, I don't think those are useful definitions, and I don't use them myself.
As for whether a surviving piece of source code that wants what I want makes a difference: source code alone? Maybe not; it depends on whether that code is ever compiled and run. Object code, actually running? Yes, it makes a huge difference.
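A toy illustration of the distinction I have in mind; this is only a sketch, and the listed "wants" are hypothetical placeholders rather than anyone's actual design for an upload:

# Source code stored as text is inert data; it wants nothing and does nothing.
source_code = 'wants = ["friends to flourish", "humanity to survive"]\nprint("acting on:", wants)'
print(type(source_code))   # <class 'str'> -- just a string sitting in storage

# Only when the text is compiled and executed is there a running process that
# represents those wants and acts on them.
program = compile(source_code, "<upload>", "exec")
exec(program)              # prints: acting on: ['friends to flourish', 'humanity to survive']

The point being that what matters to me in your scenario is whether the emulations are actually run, not merely whether their descriptions exist somewhere in storage.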
Some things get destroyed. Other things survive. Ultimately, the question in this scenario is how much do I value what we’ve lost, and how much do I value what we’ve gained?
My answer depends on the specifics of the simulation, and is based on what I value about humanity.
The thing is, I could ask precisely the same question about aging from 18 to 80. Some things are lost, other things are not. Does my 18-year-old self get destroyed in the process, or does it just transform into an 80-year-old? My answer depends on the specifics of the aging, and is based on what I value about my 18-year-old self.
We face these questions every day; they aren’t some weird science-fiction consideration. And for the most part, we accept that as long as certain key attributes are preserved, we continue to exist.
I agree with your overall assessment. However, to me if any part of humanity is lost, it is already an unacceptable loss.
OK. Thanks for clarifying your position.
At the very least, by this point we've killed a lot of people. The fact that they've been backed up doesn't make the murder less heinous.
Whether or not 'humanity' gets destroyed in this scenario depends on the definition that you apply to the word 'humanity'. If you mean the flesh and blood, the meat and bone, then yes, it gets destroyed. If you mean values and opinions, thoughts and dreams, then some of them are destroyed but not all of them; the cyberspace backups still have those things (presuming that they're actually working cyberspace backups).
Well, if nothing else happens, our new computer substrate will stop working. But if we remove that problem, in what sense has this not already happened?
If you like, we can assume that Eliezer is wrong about that. In which case, I’ll have to ask what you think is actually true, whether a smarter version of Aristotle could tell the difference by sitting in a dark room thinking about consciousness, and whether or not we should expect this to matter.
Ah, The Change in the Prime Intellect scenario. Is it possible to reconstruct meat humans if the uploads decide to do so? If not, then something has been irrecoverably lost.
Have you ever had the unfortunate experience of hanging out with really boring people; say, at a party ? The kind of people whose conversations are so vapid and repetitive that you can practically predict them verbatim in your head ? Were you ever tempted to make your excuses and duck out early ?
Now imagine that it’s not a party, but the entire world; and you can’t leave, because it’s everywhere. Would you still “feel altruistic toward humanity” at that point ?
It’s easy to conflate uploads and augments, here, so let me try to be specific (though I am not Wei Dai and do not in any way speak for them).
I experience myself as preferring that people not suffer, for example, even if they are really boring people or otherwise not my cup of tea to socialize with. I can’t see why that experience would change upon a substrate change, such as uploading. Basically the same thing goes for the other values/preferences I experience.
OTOH, I don’t expect the values/preferences I experience to remain constant under intelligence augmentation, whatever the mechanism. But that’s kind of true across the board. If you did some coherently specifiable thing that approximates the colloquial meaning of “doubled my intelligence” overnight, I suspect that within a few hours I would find myself experiencing a radically different (from my current perspective) set of values/preferences.
If instead of "doubling" you "multiplied by 10", I expect that within a few hours I would find myself experiencing an incomprehensible (from my current perspective) set of values/preferences.
Wait, why shouldn’t they be conflated ? Granted, an upload does not necessarily have to possess augmented intelligence, but IMO most if not all of them would obtain it in practice.
On preferring that people not suffer even after a substrate change: agreed, though see above. And on augmentation changing one's values within hours: I agree completely; that was my point as well.
Edited to add:
However incomprehensible one's new values might be after augmentation, I am reasonably certain that they would not include "an altruistic attitude toward humanity" (as per our current understanding of the term). By analogy, I personally neither love nor hate individual insects; they are too far beneath me.
Mostly, I prefer not to conflate them because our shared understanding of upload is likely much better-specified than our shared understanding of augment.
Except that, as you say later, you have confidence about what those supposedly incomprehensible values would or wouldn’t contain.
Turning that analogy around… I suspect that if I remembered having been an insect and then later becoming a human being, and I believed that was a reliably repeatable process, both my emotional stance with respect to the intrinsic value of insect lives and my pragmatic stance with respect to their instrumental value would be radically different than they are now and far more strongly weighted in the insects' favor.
With respect to altruism and vast intelligence gulfs more generally… I dunno. Five-day-old infants are much stupider than I am, but I generally prefer that they not suffer. OTOH, it’s only a mild preference; I don’t really seem to care all that much about them in the abstract. OTGH, when made to think about them as specific individuals I end up caring a lot more than I can readily justify over a collection. OT4H, I see no reason to expect any of that to survive what we’re calling “intelligence augmentation”, as I don’t actually think my cognitive design allows my values and my intelligence (ie my optimize-environment-for-my-values) to be separated cleanly. OT5H, there are things we might call “intelligence augmentation”, like short-term-memory buffer-size increases, that might well be modular in this way.
More specifically, I have confidence only about one specific thing that these values would not contain. I have no idea what the values would contain; this still renders them incomprehensible, as far as I’m concerned, since the potential search space is vast (if not infinite).
I am not entirely convinced that a vastly augmented mind would remember being a regular human in the same way that we humans remember what we had for lunch yesterday. The situation may be more analogous to remembering what it was like being a newborn.
Most people don’t remember what being a newborn baby was like; but even if you could recall it with perfect clarity, how much of that information would you find really useful ? A newborn’s senses are dull; his mind is mostly empty of anything but basic desires; his ability to affect the world is negligible. There’s not much there that is even worth remembering… and, IMO, there’s a good chance that a transhuman intelligence would feel the same way about its past humanity.
I agree with your later statement that you see no reason to expect any of that to survive what we're calling "intelligence augmentation", since values and intelligence may not be cleanly separable. To expand upon it a bit:
I agree with you regarding the pragmatic stance, but disagree about the "intrinsic value" part. As an adult human, you care about babies primarily because you have a strong built-in evolutionary drive to do so. And yet even that powerful drive fails to win out in many people's minds; they choose to distance themselves from babies in general, and refuse to have any of their own, specifically. I am not convinced that an augmented human would retain such a built-in drive at all (now targeted at unaugmented humans instead of, or in addition to, infants), and even if they did, I see no reason to believe that it would have a stronger hold over transhumans than over ordinary humans.
Like you, I am unconvinced that a “sufficiently augmented” human would continue to value unaugmented humans, or infants.
Unlike you, I am also unconvinced it would cease to value unaugmented humans, or infants.
Similarly, I am unconvinced that it would continue to value its own existence, or, well, anything at all. It might turn out that all “sufficiently augmented” human minds promptly turn themselves off. It might turn out that they value unaugmented humans more than anything else in the universe. Or insects. Or protozoa. Or crystal lattices. Or the empty void of space. Or paperclips.
More generally, when I say I expect my augmented self’s values to be incomprehensible to me, I actually mean it.
Mostly, I think that will depend on what kinds of augmentations we’re talking about. But I don’t think we can actually sustain this discussion with an answer to that question at any level more detailed than a handwavy notion of “vastly augmented” and analogies to insects and protozoa, so I’m content to posit either that it does, or that it doesn’t, whichever suits you.
My own intuition, FWIW, is that some such minds will remember their true origins, and others won’t, and others will remember entirely fictionalized accounts of their origins, and still others will combine those states in various ways.
You keep talking as though these kinds of value judgments ("there's not much there that is even worth remembering") were objective, or at least reliably intersubjective. It's not at all clear to me why. I am perfectly happy to take your word for it that you don't value anything about your hypothetical memories of infancy, but generalizing that to other minds seems unjustified.
For my own part… well, my mom is not a particularly valuable person, as people go. There’s no reason you should choose to keep her alive, rather than someone else; she provides no pragmatic benefit relative to a randomly selected other person. Nevertheless, I would prefer that she continue to live, because she’s my mom, and I value that about her.
My memories of my infancy might similarly not be particularly valuable as memories go; I agree. Nevertheless, I might prefer that I continue to remember them, because they’re my memories of my infancy.
And then again, I might not. (Cf incomprehensible values of augments, above.)
Even if you don’t buy my arguments, given the nearly infinite search space of things that it could end up valuing, what would its probability of valuing any one specific thing like “unaugmented humans” end up being ?
Fair enough, though we could probably obtain some clues by surveying the incredibly smart—though merely human—geniuses that do exist in our current world, and extrapolating from there.
It depends on what you mean by “remember”, I suppose. Technically, it is reasonably likely that such minds would be able to access at least some of their previously accumulated experiences in some form (they could read the blog posts of their past selves, if push comes to shove), but it’s unclear what value they would put on such data, if any.
Maybe it’s just me, but I don’t think that my own, personal memories of my own, personal infancy would differ greatly from anyone else’s—though, not being a biologist, I could be wrong about that. I’m sure that some infants experienced environments with different levels of illumination and temperature; some experienced different levels of hunger or tactile stimuli, etc. However, the amount of information that an infant can receive and process is small enough so that the sum total of his experiences would be far from unique. Once you’ve seen one poorly-resolved bright blob, you’ve seen them all.
By analogy, I ate a banana for breakfast yesterday, but I don’t feel anything special about it. It was a regular banana from the store; once you’ve seen one, you’ve seen them all, plus or minus some minor, easily comprehensible details like degree of ripeness (though, of course, I might think differently if I was a botanist).
IMO it is likely that an augmented mind might think the same way about ordinary humans. Once you’ve seen one human, you’ve seen them all, plus or minus some minor details...
As for the probability of its valuing any one specific thing like "unaugmented humans": vanishingly small, obviously, if we posit that its pre-existing value system is effectively uncorrelated with its post-augment value system, which it might well be. Hence my earlier claim that I am unconvinced that a "sufficiently augmented" human would continue to value unaugmented humans. (You seem to expect me to disagree with this, which puzzles me greatly, since I just said the same thing myself; I suspect we're simply not understanding one another.)
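To put a toy number on "vanishingly small" (a sketch only; the uniformity assumption is doing all the work here): if the post-augment value system is drawn roughly uniformly from N candidate value systems, independently of the old one, then

\[
P(\text{the new values include caring about unaugmented humans}) \approx \tfrac{1}{N} \to 0 \quad \text{as } N \to \infty,
\]

and the same goes for any other single pre-specified terminal value, unless something correlates the before and after systems.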
Sure, we could do that, which would give us an implicit notion of “vastly augmented intelligence” as something like naturally occurring geniuses (except on a much larger scale). I don’t think that’s terribly likely, but as I say, I’m happy to posit it for discussion if you like.
I agree that it's unclear what value they would put on such data.
To say that more precisely, an augmented mind would likely not value its own memories (relative to some roughly identical other memories), or any particular ordinary human, any more than an adult human values its own childhood blanket rather than some identical blanket, or values one particular and easily replaceable goldfish.
The thing is, some adult humans do value their childhood blankets, or one particular goldfish.
And others don’t.
That's correct; for some reason, I was thinking that you believed that a human's preference for the well-being of his (formerly) fellow humans is likely to persist after augmentation. Thus, I did misunderstand your position; my apologies.
I think that childhood blankets and goldfish are different from an infant’s memories, but perhaps this is a topic for another time...
I’m not quite sure what other time you have in mind, but I’m happy to drop the subject. If you want to pick it up some other time feel free.