Robin Hanson thinks that strong cooperation within copy clans won’t have a huge impact because there will still be a tradeoff between cooperation and specialization. But if the clan consists of copies of someone like John von Neumann, it can easily best world-class specialists in every field, just by forking a bunch of copies and having each copy take a few subjective months to read up on one field and do a bit of practicing. There is little need for such a clan to cooperate with outsiders (except maybe investors/donors for the initial capital) and I don’t see what can prevent it from taking over the world as a singleton once it comes into existence.
I would think that we would probably want to have cultural, ethical, and legal rules against infinitely copying yourself. For one thing, that leads to the rather dystopian situation Robin Hanson was talking about; and for another, it would lead to a rapidly diminishing amount of variety among humans, which would be sad. One or two copies of you might be OK, but would you really want to live in a world where there are billions of copies of you, billions of copies of von Neumann, and almost no one else to talk to? Remember, you are now immortal, and the amount of subjective time you are going to live is going to be vast; boredom could be a huge problem, and you would want a huge variety of people to interact with and be social with, wouldn’t you?
I really think that we wouldn’t want to allow a large amount of copying of the exact same mind to happen.
Over the course of even a few centuries of subjective existence, I expect the divergence experienced by copies of me would be sufficient to keep me entertained.
A few centuries, sure, but how many eternities?
My point was that divergence is rapid enough that a few centuries would suffice to create significant diversity. Over an eternity, it would create maximal diversity, but of course that’s not what you’re asking: you’re asking whether the diversity created by copies of me would keep me entertained indefinitely.
And of course I don’t know, but my (largely unjustified) intuition is that no, it wouldn’t.
That said, my intuition is also that the diversity created by an arbitrary number of other people will also be insufficient to keep me entertained indefinitely, so eternal entertainment is hardly a reason to care about whether I have “someone else” to talk to.
Eh. “Talk to” in this case is quite broad. It doesn’t just literally mean actually talking to them; it means reading books that you wouldn’t have written that contain ideas you wouldn’t have thought of, seeing movies that you wouldn’t have filmed, playing games from very different points of view, etc. If all culture, music, art, and ideas came from minds that were very similar, I think it would tend to get boring much more quickly.
I do think there is a significant risk that an individual or a society might drift into a stable, stagnant, boring state over time, and interaction and social friction with a variety of fundamentally different minds could have a huge effect on that.
And, of course, there is a more practical reason why everyone would want to ban massive copying of single minds, which is that it would dramatically reduce the resources and standard of living of any one mind. A society of EMs would urgently need some form of population control.
Edit: By the way, there also wouldn’t necessarily be even the limited diversity you might expect from having different versions of you diverge over centuries. In the kind of environment we’re talking about here, less competitive versions of you would also be wiped out by more competitive versions of you, leaving only a very narrow band of the diversity you yourself would be capable of becoming.
Completely agreed about “talk to” being metaphorical. Indeed, it would astonish me if over the course of a few centuries of the kind of technological development implied by whole-brain emulation, we didn’t develop means of interaction with ourselves and others that made the whole notion of concerning ourselves with identity boundaries in the first place a barely intelligible historical artifact for most minds. But I digress.
That aside, I agree that interaction with “fundamentally different minds” could have a huge effect on our tendency to stagnate, but the notion that other humans have fundamentally different minds in that sense just seems laughably implausible to me. If we want to interact with fundamentally different minds over the long haul, I think our best bet will be to create them.
More generally, it sounds like we just have very different intuitions about how much diversity there is among individual minds today, relative to how much diversity there is within a single mind today. I don’t have any particularly compelling additional evidence to offer here, so I think my best move is to accept as additional evidence that your expectations about this differ from mine.
As far as the population problem is concerned, I agree, but this has nothing to do with duplicates-vs-“originals”. Distributing resources among N entities reduces the average resources available to one entity, regardless of the nature of the entities. Copying a person is no worse than making a person by any other means from this perspective.
I agree that if we are constantly purging most variation, the variation at any given moment will be small. (Of course, if we’re right about the value of variation, it seems to follow that variation will therefore be a rare and valued commodity, which might increase the competitiveness of otherwise-less-competitive individuals.)
Well, if Eliezer’s FAI theory is correct, then the possible end states of any mind capable of deliberate self-modification should be significantly limited by the values, wants, and desires of the initial state of the mind. If that’s the case, then there are whole vast areas of potential human-mind-space that you or your descendants would never move into or through because of their values, while EMs derived from another human might.
As for your other point: you are right that duplicates vs. originals doesn’t necessarily make a difference in terms of population, but it may; theoretically, at least at the subjective speeds of the EMs, it should be much faster to make an exact copy of you than to make a “child” EM and raise it to adulthood. And if you are the type of person who wants to make lots of copies of yourself, then all those copies will also want to make lots of copies; if a culture of EMs allows infinite duplication of self, things could get out of control very quickly.
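(A quick back-of-envelope sketch of that last point — assuming, purely for illustration, that every copy-inclined EM spawns one new copy per subjective month and that every copy inherits the same inclination:)

```python
# Back-of-envelope: unrestricted self-copying grows geometrically.
# The one-copy-per-subjective-month rate is a made-up illustrative assumption.
copies = 1
for month in range(1, 25):
    copies *= 2  # every existing copy makes one more copy this month
    if month % 6 == 0:
        print(f"after {month:2d} subjective months: {copies:,} copies of the original")
```

Two subjective years at that arbitrary rate already means over sixteen million copies of a single mind, which is the sense in which things could “get out of control very quickly.”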
If that’s the case, then there are whole vast areas of potential human-mind-space that you or your descendants would never move into or through because of their values, while EMs derived from another human might.
We seem to keep trading vague statements about our intuitions back and forth, so let me try to get us a bit more concrete, and maybe that will help us move past that.
For convenient reference, call D the subset of mind-space that I and my descendants can potentially move through or into, and H the subset of mind-space that some human can potentially move through or into. I completely agree that H is larger than D.
What’s your estimate of H/D? My intuitive sense is that it’s <2.
it should be much faster to make an exact copy of you than to make a “child” EM and raise it to adulthood. And if you are the type of person who wants to make lots of copies of yourself, then all those copies will also want to make lots of copies
Even if I grant all of that, it still seems that what’s necessary (supposing there’s a resource limit here) is to limit the number of people I create, not to prevent me from making clones. If I want to take my allotment of creatable people and spend it on creating clones rather than creating “children,” how does that affect you, as long as that allotment is set sensibly? Conversely, if I overflow that allotment with “children”, how does that affect you less than if I’d overflowed it with clones?
Put differently: once we effectively restrict the growth rate, we no longer have to be concerned with which factors would have been correlated with higher growth rate had we not restricted it.
What’s your estimate of H/D?
My intuitive sense is that it’s <2.
I would think it’s far higher than that. Probably H/D>100, and it might be far higher than that. I tend to think that maintaining some continuity of identity would be very important to uploaded minds (because, honestly, isn’t that the whole point of uploading your mind instead of just emulating a random human-like mind?). I also tend to think that there are vast categories of experiences that you would not put yourself through just so you could be the kind of person who had been through that experience; if there are mind-states that can only be reached by, say, “losing a child and then overcoming that horrible experience after years of grieving through developing a kind of inner strength”, then I can’t imagine any mind would intentionally do that to themselves just to explore more sections of mind-space.
Or, think about it in terms of beliefs. Say that mind A is an atheist. Do you think that the person who has mind A would ever intentionally turn themselves into a theist or into a spiritualist just in order to experience those emotions, and to get to places in mind-space that can only be reached from there? Judging from the whole of human experience, by just preventing yourself from going that route, you’re probably eliminating at least half of all mind-states that a normal human can reach; many of them states that can apparently produce incredibly interesting culture, music, literature, and art. Not to mention all the possible mind-states that can only be reached by being a former theist who has lost his faith. And that’s just one example; there are probably dozens or hundreds of beliefs, values, and worldviews that any mind has that it would never want to change, because they are simply too fundamental to that mind’s basic identity. Even with basic things: Eliezer once mentioned, when talking about FAI theory, “My name is Eliezer Yudkowsky. Perhaps it would be easier if my name was something shorter and easier to remember, but I don’t want to change my name. And I don’t want to change into a person who would want to change my name.” (That’s not an exact quote, but it was something along those lines.) I would be surprised if any descendant of your mind would ever get to even 1% of all possible human mind-space.
Not only that, if mind A has a certain set of values and beliefs, and then you make a million copies of mind A and they all interact with each other all the time, I would think that would tend to discourage any of them from changing or questioning those values or beliefs. Usually the main way people change their minds is when they encounter someone with fundamentally different beliefs who seems to be intelligent and worth listening to; on the other hand, if you surround yourself with only people who believe the same thing you do, you are very unlikely to ever change that belief; if anything, social pressure would likely lock it into place. Therefore, I would say that a mind that primarily interacts with other copies of itself would be far more likely to become static and unchanging than that same mind in an environment where it is interacting with other minds with different beliefs.
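(To make the H/D disagreement slightly more concrete, here is a toy model — my own construction, not anything either side has argued — that treats mind-space as a random directed graph in which a mind’s values “block” some fraction of the possible transitions. Its only point is that the reachable fraction is very sensitive to how much gets blocked: below a threshold nearly everything stays reachable, and above it the reachable fraction collapses, so intuitions like “<2” and “>100” can both fall out of the same picture depending on how much blocking you assume.)

```python
import random

def mean_reachable_fraction(n_states=20000, out_degree=4, block_prob=0.5,
                            trials=20, seed=0):
    """Average fraction of a toy 'mind-space' reachable from a random start state.

    Mind-space is modelled (very crudely) as a random directed graph: each state
    has out_degree candidate transitions, and each transition is independently
    'value-blocked' (the mind refuses to make that change) with probability
    block_prob. Reachability is computed by BFS over the surviving transitions.
    All parameters are arbitrary illustrative assumptions.
    """
    rng = random.Random(seed)
    successors = []
    for _ in range(n_states):
        outs = [rng.randrange(n_states)
                for _ in range(out_degree) if rng.random() > block_prob]
        successors.append(outs)

    total = 0.0
    for _ in range(trials):
        start = rng.randrange(n_states)
        seen, frontier = {start}, [start]
        while frontier:
            nxt = []
            for state in frontier:
                for succ in successors[state]:
                    if succ not in seen:
                        seen.add(succ)
                        nxt.append(succ)
            frontier = nxt
        total += len(seen) / n_states
    return total / trials

for p in (0.0, 0.5, 0.7, 0.8):
    print(f"block_prob={p:.1f}: mean reachable fraction "
          f"{mean_reachable_fraction(block_prob=p):.3f}")
```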
I can’t imagine any mind would intentionally do that to themselves just to explore more sections of mind-space.
Mm. That’s interesting. While I can’t imagine actually arranging for my child to die in order to explore that experience, I can easily imagine going through that experience (e.g., with some kind of simulated person) if I thought I had a reasonable chance of learning something worthwhile in the process, if I were living in a post-scarcity kind of environment.
I can similarly easily imagine myself temporarily adopting various forms of theism, atheism, former-theism, and all kinds of other mental states.
And I can even more easily imagine encouraging clones of myself to do so, or choosing to do so when there’s a community of clones of myself already exploring other available paths. Why choose a path that’s already being explored by someone else?
It sounds like we’re both engaging in mind projection here… you can’t imagine a mind being willing to choose these sorts of many-sigmas-out experiences, so you assume a population of clone-minds would stick pretty close to a norm; I can easily imagine a mind choosing them, so I assume a population of clone-minds would cover most of the available space.
And it may well be that you’re more correct about what clones of an arbitrarily chosen mind would be like… that is, I may just be an aberrant data point.
I can easily imagine a mind choosing them, so I assume a population of clone-minds would cover most of the available space.
OK, so let’s say for the sake of argument that you’re more flexible about such things than 90% of the population is. If so, would you be willing to modify yourself into someone less flexible, into someone who never would want to change himself? If you don’t, then you’ve just locked yourself out of about 90% of all possible mind-space on that one issue alone. However, if you do, then you’re probably stuck in that state for good; the new you probably wouldn’t want to change back.
Absolutely… temporarily being far more rigid-minded than I am would be fascinating. And knowing that the alarm was ticking and that I was going to return to being my ordinary way of being would likely be deliciously terrifying, like a serious version of a roller coaster.
But, sure, if we posit that the technology is limited such that temporary changes of this sort aren’t possible, then I wouldn’t do that if I were the only one of me… though if there were a million of me around, I might.
Not to mention dangerous. Dissensus is one of the few nearly-universally effective insurance policies.
Or possibly a good way to get everyone killed. For example, suppose any sufficiently intelligent being can build a device to trigger a false vacuum catastrophe.
Nothing short of a very powerful singleton could stop competing, intelligent, computation-based agents from using all available computation resources. If the most efficient way to use them is to parallelize many small instances, then that’s what they’ll do. How do you stop people from running whatever code they please?
Nothing short of a very powerful singleton could stop competing, intelligent, computation-based agents from using all available computation resources.
I don’t see any reason why a society of competing, intelligent, computation-based agents wouldn’t be able to prevent any single computation-based agent from doing something they want to make illegal. You don’t need a singleton; a society of laws probably works just fine.
And, in fact, you would probably have to have laws and things like that, unless you want other people hacking into your mind.
For society to be sure of what code you’re running, they need to enforce transparency that ultimately extends to the physical, hardware level. Even if there are laws, to enforce them I need to know you haven’t secretly built custom hardware that would give you an illegal advantage, which falsely reports that it’s running something else and legal. In the limit of a nano-technology-based, AGI scenario, this means verifying the actual configurations of atoms of all matter everyone controls.
A singleton isn’t required, but it seems like the only stable solution.
Well, you don’t have to assume that 100% of all violations of laws will be caught to get a stable society. Just that enough of them are caught to deter most potential criminals.
It depends on a lot of variables, of course, most of which we don’t know yet. But, hypothetically speaking, if the society of EMs we’re talking about is running on the same network (or the same mega-computer, or whatever), then it should be pretty obvious if someone suddenly makes a dozen illegal copies of themselves and starts using far more network resources than they were a short time ago.
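(For what it’s worth, a minimal sketch of the kind of resource-usage check being described — the window size, threshold factor, and indeed the whole scheme are illustrative assumptions, not a claim about how an EM network would actually be policed:)

```python
from collections import deque

def make_monitor(window=24, factor=5.0):
    """Toy monitor: flag any EM whose reported resource usage jumps to more
    than `factor` times its own recent average. All parameters are made up."""
    history = {}

    def record(em_id, usage):
        past = history.setdefault(em_id, deque(maxlen=window))
        if past:
            baseline = sum(past) / len(past)
            if usage > factor * baseline:
                print(f"ALERT: {em_id} using {usage} units "
                      f"(recent average {baseline:.1f})")
        past.append(usage)

    return record

record = make_monitor()
for _ in range(30):
    record("em-42", 10)   # steady baseline usage
record("em-42", 130)      # a dozen extra copies suddenly running -> flagged
```

Of course, a check like this only sees how much is being used, not what it is being used for.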
Well, you don’t have to assume that 100% of all violations of laws will be caught to get a stable society. Just that enough of them are caught to deter most potential criminals.
That’s a tradeoff against the benefit a criminal who isn’t caught gets from the crime. The benefit here could be enormous.
it should be pretty obvious if someone suddenly makes a dozen illegal copies of themselves and starts using far more network resources than they were a short time ago.
I was assuming that creating illegal copies lets you use the same resources more intelligently, and profit more from them. Also, if the only thing you can measure is the amount of resource use and not the exact kind of use (because you don’t have radical transparency), then people could acquire resources first and convert them to illegal use later.
Network resources are externally visible, but the exact code you’re running internally isn’t. You can purchase resources first and illegally repurpose them later, etc.
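(To put the deterrence tradeoff above in simple expected-value terms — made-up numbers, and assuming a risk-neutral offender:)

```python
def deterred(p_caught, benefit, penalty):
    """Crude risk-neutral deterrence condition: the crime is deterred only if
    the expected penalty outweighs the expected benefit."""
    return p_caught * penalty > (1 - p_caught) * benefit

# Modest benefit: even a 30% catch rate deters.
print(deterred(p_caught=0.3, benefit=100, penalty=1_000))         # True

# "Enormous benefit" case: if a successful illegal copy-clan gains vastly more
# than any penalty that can be imposed on it, even a 99% catch rate fails.
print(deterred(p_caught=0.99, benefit=1_000_000, penalty=1_000))  # False
```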
von Neumann was very smart, but I very much doubt he would have been better than everyone at all jobs if trained in those jobs. There is still comparative advantage, even among the very smartest and most capable.
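(The standard comparative-advantage arithmetic, with invented productivity numbers: even if a von Neumann copy is absolutely better at every task, total output still rises when the less capable agent specializes where its opportunity cost is lowest.)

```python
# Toy Ricardian example with made-up numbers. Agent A (the von Neumann copy)
# out-produces agent B at both tasks, yet dividing work by comparative
# advantage still beats each agent splitting its own time evenly.
HOURS = 10
RATE = {                       # units produced per hour
    "A": {"research": 10, "admin": 10},
    "B": {"research": 1,  "admin": 2},
}

# Baseline: each agent spends half its time on each task.
baseline = {
    task: sum(RATE[agent][task] * HOURS / 2 for agent in RATE)
    for task in ("research", "admin")
}

# Specialization: B (lower opportunity cost in admin) does only admin,
# while A shifts hours toward research but still covers the same admin total.
specialized = {
    "research": RATE["A"]["research"] * 6,                         # A: 6h research
    "admin": RATE["A"]["admin"] * 4 + RATE["B"]["admin"] * HOURS,  # A: 4h, B: 10h admin
}

print(baseline)     # {'research': 55.0, 'admin': 60.0}
print(specialized)  # {'research': 60, 'admin': 60} -- more research, same admin
```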