Mostly, I prefer not to conflate them because our shared understanding of “upload” is likely much better specified than our shared understanding of “augment”.
I agree completely; that was my point as well.
Except that, as you say later, you have confidence about what those supposedly incomprehensible values would or wouldn’t contain.
By analogy, I personally neither love nor hate individual insects; they are too far beneath me.
Turning that analogy around… I suspect that if I remembered having been an insect and then later becoming a human being, and I believed that was a reliably repeatable process, both my emotional stance with respect to the intrinsic value of insect lives and my pragmatic stance with respect to their instrumental value would be radically different than they are now and far more strongly weighted in the insects’ favor.
With respect to altruism and vast intelligence gulfs more generally… I dunno. Five-day-old infants are much stupider than I am, but I generally prefer that they not suffer. OTOH, it’s only a mild preference; I don’t really seem to care all that much about them in the abstract. OTGH, when made to think about them as specific individuals I end up caring a lot more than I can readily justify over a collection. OT4H, I see no reason to expect any of that to survive what we’re calling “intelligence augmentation”, as I don’t actually think my cognitive design allows my values and my intelligence (i.e., my ability to optimize my environment for my values) to be separated cleanly. OT5H, there are things we might call “intelligence augmentation”, like short-term-memory buffer-size increases, that might well be modular in this way.
Except that, as you say later, you have confidence about what those supposedly incomprehensible values would or wouldn’t contain.
More specifically, I have confidence about only one thing that these values would not contain. I have no idea what the values would contain; this still renders them incomprehensible, as far as I’m concerned, since the potential search space is vast (if not infinite).
I suspect that if I remembered having been an insect and then later becoming a human being...
I am not entirely convinced that a vastly augmented mind would remember being a regular human in the same way that we humans remember what we had for lunch yesterday. The situation may be more analogous to remembering what it was like being a newborn.
Most people don’t remember what being a newborn baby was like; but even if you could recall it with perfect clarity, how much of that information would you find really useful? A newborn’s senses are dull; his mind is mostly empty of anything but basic desires; his ability to affect the world is negligible. There’s not much there that is even worth remembering… and, IMO, there’s a good chance that a transhuman intelligence would feel the same way about its past humanity.
… and I believed that was a reliably repeatable process, both my emotional stance with respect to the intrinsic value of insect lives and my pragmatic stance with respect to their instrumental value would be radically different than they are now and far more strongly weighted in the insects’ favor.
I agree with your later statement:
OT4H, I see no reason to expect any of that to survive what we’re calling “intelligence augmentation”, as I don’t actually think my cognitive design allows my values and my intelligence (i.e., my ability to optimize my environment for my values) to be separated cleanly.
To expand upon it a bit:
I agree with you regarding the pragmatic stance, but disagree about the “intrinsic value” part. As an adult human, you care about babies primarily because you have a strong built-in evolutionary drive to do so. And yet, even that powerful drive fails to win out in many people’s minds; they choose to distance themselves from babies in general, and refuse to have any of their own in particular. I am not convinced that an augmented human would retain such a built-in drive at all (targeted at unaugmented humans instead of, or in addition to, infants), and even if it did, I see no reason to believe that the drive would have a stronger hold over transhumans than it has over ordinary humans.
Like you, I am unconvinced that a “sufficiently augmented” human would continue to value unaugmented humans, or infants.
Unlike you, I am also unconvinced it would cease to value unaugmented humans, or infants.
Similarly, I am unconvinced that it would continue to value its own existence, or, well, anything at all. It might turn out that all “sufficiently augmented” human minds promptly turn themselves off. It might turn out that they value unaugmented humans more than anything else in the universe. Or insects. Or protozoa. Or crystal lattices. Or the empty void of space. Or paperclips.
More generally, when I say I expect my augmented self’s values to be incomprehensible to me, I actually mean it.
I am not entirely convinced that a vastly augmented mind would remember being a regular human in the same way that we humans remember what we had for lunch yesterday.
Mostly, I think that will depend on what kinds of augmentations we’re talking about. But I don’t think we can actually sustain this discussion with an answer to that question at any level more detailed than a handwavy notion of “vastly augmented” and analogies to insects and protozoa, so I’m content to posit either that it does, or that it doesn’t, whichever suits you.
My own intuition, FWIW, is that some such minds will remember their true origins, and others won’t, and others will remember entirely fictionalized accounts of their origins, and still others will combine those states in various ways.
There’s not much there that is even worth remembering.
You keep talking like this, as though these kinds of value judgments were objective, or at least reliably intersubjective. It’s not at all clear to me why. I am perfectly happy to take your word for it that you don’t value anything about your hypothetical memories of infancy, but generalizing that to other minds seems unjustified.
For my own part… well, my mom is not a particularly valuable person, as people go. There’s no reason you should choose to keep her alive, rather than someone else; she provides no pragmatic benefit relative to a randomly selected other person. Nevertheless, I would prefer that she continue to live, because she’s my mom, and I value that about her.
My memories of my infancy might similarly not be particularly valuable as memories go; I agree. Nevertheless, I might prefer that I continue to remember them, because they’re my memories of my infancy.
And then again, I might not. (Cf. incomprehensible values of augments, above.)
Unlike you, I am also unconvinced it would cease to value unaugmented humans, or infants. Similarly, I am unconvinced that it would continue to value its own existence, or, well, anything at all.
Even if you don’t buy my arguments, given the nearly infinite search space of things that it could end up valuing, what would its probability of valuing any one specific thing like “unaugmented humans” end up being?
But I don’t think we can actually sustain this discussion with an answer to that question at any level more detailed than a handwavy notion of “vastly augmented” and analogies to insects and protozoa, so I’m content to posit either that it does, or that it doesn’t, whichever suits you.
Fair enough, though we could probably obtain some clues by surveying the incredibly smart—though merely human—geniuses that do exist in our current world, and extrapolating from there.
My own intuition, FWIW, is that some such minds will remember their true origins...
It depends on what you mean by “remember”, I suppose. Technically, it is reasonably likely that such minds would be able to access at least some of their previously accumulated experiences in some form (they could read the blog posts of their past selves, if push comes to shove), but it’s unclear what value they would put on such data, if any.
You keep talking like this, as though these kinds of value judgments were objective, or at least reliably intersubjective. It’s not at all clear to me why.
Maybe it’s just me, but I don’t think that my own, personal memories of my own, personal infancy would differ greatly from anyone else’s—though, not being a biologist, I could be wrong about that. I’m sure that some infants experienced environments with different levels of illumination and temperature; some experienced different levels of hunger or tactile stimuli, etc. However, the amount of information that an infant can receive and process is small enough that the sum total of his experiences would be far from unique. Once you’ve seen one poorly-resolved bright blob, you’ve seen them all.
By analogy, I ate a banana for breakfast yesterday, but I don’t feel anything special about it. It was a regular banana from the store; once you’ve seen one, you’ve seen them all, plus or minus some minor, easily comprehensible details like degree of ripeness (though, of course, I might think differently if I were a botanist).
IMO it is likely that an augmented mind might think the same way about ordinary humans. Once you’ve seen one human, you’ve seen them all, plus or minus some minor details...
what would its probability of valuing any one specific thing like “unaugmented humans” end up being?
Vanishingly small, obviously, if we posit that its pre-existing value system is effectively uncorrelated with its post-augment value system, which it might well be. Hence my earlier claim that I am unconvinced that a “sufficiently augmented” human would continue to value unaugmented humans. (You seem to expect me to disagree with this, which puzzles me greatly, since I just said the same thing myself; I suspect we’re simply not understanding one another.)
we could probably obtain some clues by surveying the incredibly smart—though merely human—geniuses that do exist in our current world, and extrapolating from there.
Sure, we could do that, which would give us an implicit notion of “vastly augmented intelligence” as something like naturally occurring geniuses (except on a much larger scale). I don’t think that’s terribly likely, but as I say, I’m happy to posit it for discussion if you like.
it’s unclear what value they would put on such data, if any. [...] I don’t think that my own, personal memories of my own, personal infancy would differ greatly from anyone else’s [...] IMO it is likely that an augmented mind might think the same way about ordinary humans. Once you’ve seen one human, you’ve seen them all, plus or minus some minor details...
I agree that it’s unclear.
To say that more precisely, an augmented mind would likely not value its own memories (relative to some roughly identical other memories), or any particular ordinary human, any more than an adult human values its own childhood blanket over some identical blanket, or values one particular and easily replaceable goldfish.
The thing is, some adult humans do value their childhood blankets, or one particular goldfish. And others don’t.
You seem to expect me to disagree with this, which puzzles me greatly, since I just said the same thing myself; I suspect we’re simply not understanding one another.
That’s correct; for some reason, I was thinking that you believed that a human’s preference for the well-being of his (formerly) fellow humans is likely to persist after augmentation. Thus, I did misunderstand your position; my apologies.
The thing is, some adult humans do value their childhood blankets, or one particular goldfish.
I think that childhood blankets and goldfish are different from an infant’s memories, but perhaps this is a topic for another time...
I’m not quite sure what other time you have in mind, but I’m happy to drop the subject. If you want to pick it up some other time, feel free.