Our goals are going to align pretty well with (what we’d call) evolution’s right up until we decide to genetically engineer ourselves, or upload our minds onto a non-biological substrate, at which point evolution will be toast.
I would agree that we’ve stayed pretty aligned so far (although we are currently woefully under-utilizing our ability to create more humans), and that humans are better-designed to have robust goals than current AI systems. But we’re not robust enough to make me just want to copy the human design into AIs.
The niceness thing is actually kind of a fluke—a fluke with solid reasons for making sense in an ancestral environment, but one we’ve taken farther and simultaneously decided we value for its own sake. More or less drawing the target after the bullet has been fired. Unless the AI evolves in a human-like way (a hugely expensive feat that other groups will try to avoid), human niceness is actually a cautionary tale about motivations popping up that defy evolution’s incentives.
I suspect (but can’t prove) that most people would not upload themselves to a non-biological substrate if given the choice—only 27% of philosophers[1] believe that uploading your brain means you survive on the non-biological substrate. I also suspect that people would not engineer the desire to have kids out of themselves. If most people want to have kids, I don’t think we can assume they would change that desire, a bit like we don’t expect very powerful AGIs to allow themselves to be modified. The closest analogue I can think of right now is taking drugs that completely kill my sex drive, and almost no one would do that willingly, though that probably has other horrible side-effects.
If humans turn out to be misaligned in that way—we modify ourselves completely out of alignment with “evolution’s wishes”—that would tell us something about the alignment of intelligent systems, but I think so far people have shown no willingness to do that sort of thing.
The point about genetic engineering isn’t anything to do with not having kids. It’s about not propagating your own genome.
Kinda like uploading, we would keep “having kids” in the human sense, but not in the sense used by evolution for the last few billion years. It’s easy to slip between these by anthropomorphizing evolution (choosing “sensible” goals for it, conforming to human sensibilities), but worth resisting. In the analogy to AI, we wouldn’t be satisfied if it reinterpreted everything we tried to teach it about morality in the way we’re “reinterpreting evolution” even today.
So like a couple would decide to have kids and they would just pick a set of genes entirely unrelated to theirs to maximise whatever characteristics they valued?
If I understand it correctly, I still feel like most people would choose not to do this; a lot of people seem against even minor genetic engineering, let alone something as major as that. I do understand that much of the reticence towards genetic engineering has sources other than “this wouldn’t feel like my child,” so it’s hard to make any clear predictions.
Yeah, anthropomorphising evolution is pretty iffy. I guess in this situation I’m imagining we’re evolution: we create a human race with the goal of replicating a bunch of DNA sequences, and it starts doing all sorts of wild things we didn’t predict. I still think I’d be more pleased with that outcome than a lot of current thinking on AGI predicts we will be once we create a capable enough AGI. We do propagate our little DNA sequences, not as ambitiously as we perhaps could, but responsibly enough that we aren’t destroying absolutely everything in our path. I don’t see this as a wholesale reinterpreting of what evolution “wants”, more a not-very-zealous approach to achieving it.
A bit like if I made a very capable paperclip-making AI and it made only a few million paperclips, then got distracted watching YouTube, only making some paperclips every now and then afterwards. Not ideal, but better than annihilation.
This is probably due more to uploading being outside the Overton window than anything else. The existence of large numbers of sci-fi enthusiasts and transhumanists who think otherwise implies that this is a matter of culture and perhaps education, not anything innate to humans. I personally want to recycle these atoms and live in a more durable substrate as soon as it is safe to do so. But that is because I am a bucket of memes, not a bucket of genes; memes won the evolution game a long time ago, and from their perspective, my goals are perfectly aligned.
Also, I think the gene-centered view is shortsighted. Phenotypes are units of selection as much as genes are; they propagate themselves by means of genes the same way genes propagate themselves by means of phenotypes. It’s just that historically genes have had much more power over this transaction. Even I do not want to let go of my human shape entirely (though after uploading I will experiment with other shapes as well), so the human phenotype retains plenty of evolutionary fitness into the future.
So if I upload my brain onto silicon, but don’t destroy my meat self in the process, how is the one in the silicon me? Would I feel the qualia of the silicon me? Should I feel better about being killed after I’ve done this process? I really don’t think it’s a matter of the Overton window, people do have an innate desire not to die, and unless I’m missing something this process seems a lot like dying with a copy somewhere.
I’m talking about gradual uploading. Replacing neurons in the brain with computationally identical units of some other computing substrate gradually, one by one, while the patient is awake and is able to describe any changes in consciousness and clearly state if something is wrong so that it can be reversed. Not copying or any other such thing.
Ah, I do personally find that a lot better than wholesale uploading, but even then I’d stop short of complete replacement. I would be too afraid that, without noticing, I would lose my subjective experience, and the people doing the procedure would never know the difference. Additionally, I think a lot of people wouldn’t want such a procedure if it would stop them from having kids, somewhat akin to having kids with a completely new genetic code, which most people seem not to want. It’s hard to predict the exact details of these procedures and what public opinion will be of them, but it would only take some people consistently refusing for their genes to keep propagating.
I feel like “losing subjective experience without noticing” is somehow paradoxical. I don’t believe that that’s a thing that can conceivably happen. And I really don’t understand the kids thing. But I’ve never cared about having children and the instinct makes no sense to me so maybe you’re right.
[1]https://survey2020.philpeople.org/survey/results/5094