There’s an idea I’ve seen around here on occasion to the effect that creating and then killing people is bad, so that, for example, when modeling human behavior you should be careful that your models don’t become people in their own right.
I think this is bunk. Consider the following:
--
Suppose you have an uploaded human, and fork the process. If I understand the meme correctly, this creates an additional person, such that killing the second process counts as murder.
Does this still hold if the two processes are not made to diverge; that is, if they are deterministic (or use the same pseudorandom seed) and are never given differing inputs?
Suppose that instead of forking the process in software, we constructed an additional identical computer, set it on the table next to the first one, and copied the program state over. Suppose further that the computers were cued up to each other so that they were not only performing the same computation, but executing the steps at the same time as each other. (We won’t readjust the sync on an ongoing basis; it’s just part of the initial conditions, and the deterministic nature of the algorithm ensures that they stay in step after that.)
Suppose that the computers were not electronic, but insanely complex mechanical arrays of gears and pulleys performing the same computation—emulating the electronic computers at reduced speed, perhaps. Let us further specify that the computers occupy one fewer spatial dimension than the space they’re embedded in, such as flat computers in 3-space, and that the computers are pressed flush up against each other, corresponding gears moving together in unison.
What if the corresponding parts (which must be staying in sync with each other anyway) are superglued together? What if we simply build a single computer twice as thick? Do we still have two people?
--
No, of course not. And, on reflection, it’s obvious that we never did: redundant computation is not additional computation.
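The determinism point is easy to check in miniature. Below is a toy sketch in Python (every name in it is invented for illustration; nothing here is meant to resemble an actual em) of two forked processes that share a seed and an input stream and therefore never diverge:

```python
# Toy sketch only: a deterministic "process" whose state evolves from a seed
# and an input stream. Two copies with the same seed and the same inputs can
# never diverge, so the second adds no computation the first hasn't done.
import hashlib
import random


class ToyProcess:
    def __init__(self, seed: int):
        self.rng = random.Random(seed)  # same pseudorandom seed -> same stream
        self.state = b""

    def step(self, input_event: bytes) -> None:
        noise = self.rng.getrandbits(64).to_bytes(8, "big")
        self.state = hashlib.sha256(self.state + input_event + noise).digest()


a = ToyProcess(seed=42)
b = ToyProcess(seed=42)  # the "fork": identical initial conditions

for event in [b"email", b"coffee", b"email"]:
    a.step(event)
    b.step(event)  # never given differing inputs

print(a.state == b.state)  # True: redundant computation, not additional computation
```

Any inspection of the second process just re-derives, bit for bit, facts already determined by the first.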
So what if we cause the ems to diverge slightly? Let us stipulate that we give them some trivial differences, such as the millisecond timing of when they receive their emails. If they are not actively trying to diverge, I anticipate that this would not make much difference to them in the long term—the ems would still be, for the most part, the same person. Do we have two distinct people, or two mostly redundant people—perhaps one and a tiny fraction, on aggregate? I think a lot of people will be tempted to answer that we have two.
But consider, for a moment, if we were talking not about people but about, say, works of literature. Two very similar stories, even if by a raw diff they share almost no words, are not worth much more than one of them alone.
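For what it’s worth, here is one toy way to cash out the “one and a tiny fraction” bookkeeping. It is purely illustrative; a character-level diff is a deliberately crude stand-in for whatever the right measure of divergence between two minds would be:

```python
# Hypothetical bookkeeping: count the aggregate number of people as one plus
# the fraction of the second copy's state that is not redundant with the first.
import difflib


def effective_count(state_a: str, state_b: str) -> float:
    shared = difflib.SequenceMatcher(None, state_a, state_b).ratio()  # 0..1
    return 1.0 + (1.0 - shared)  # fully redundant -> 1.0, fully novel -> 2.0


em_a = "...read the email at 09:00:00.120, replied at 09:03, went for a walk..."
em_b = "...read the email at 09:00:00.450, replied at 09:03, went for a walk..."
print(f"{effective_count(em_a, em_b):.3f}")  # slightly more than 1.0
```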
The attitude I’ve seen seems to treat people as a special case—as a separate magisterium.
--
I wish to assert that this value system is best modeled as a belief in souls. Not immortal souls with an afterlife, you understand, but mortal souls, that are created and destroyed. And the world simply does not work that way.
If you really believed that, you’d try to cause global thermonuclear war, in order to prevent the birth of billions of people, or more, all of whom will inevitably be killed. It might take until the heat death of the universe, but they will die.
You make good points. I do think that multiple independent identical copies have the same moral status as one. Anything else is going to lead to absurdities like those you mentioned, such as the idea that cutting a mechanical computer in half doubles its moral worth.
I have for a while had a feeling that the moral value of a being’s existence has something to do with the amount of unique information generated by its mind, resulting from its inner emotional and intellectual experience. (Where “has something to do with” = it’s somewhere in the formula, but not the whole formula.) If you have 100 identical copies of a mind, and you delete 99 of them, you have not lost any information. If you have two slightly divergent copies of a mind, and you delete one of them, then that’s bad, but only as bad as destroying whatever information exists in it and not the other copy. Abortion doesn’t seem to be a bad thing (apart from any pain caused; that should still be minimized) because a fetus’s brain contains almost no information not compressible to its DNA and environmental noise, neither of which seems to be morally valuable. Similarly with animals; it appears many animals have some inner emotional and intellectual experience (to varying degrees), so I consider deleting animal minds and causing them pain to have terminal negative value, but not nearly as great as doing the same to humans. (I also suspect that a being’s value has something to do with the degree to which its mind’s unique information is entangled with and modeled (in lower resolution) by other minds, à la I Am A Strange Loop.)
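To make the “unique information” intuition slightly more concrete, here is a rough sketch that uses compression as a crude proxy for algorithmic information content. The helpers and numbers are invented for illustration; real minds are not zlib-compressible byte strings, and this is not meant as the actual formula:

```python
# Rough sketch: approximate "information lost by deleting a copy, given what
# is already kept" by how much longer the pair compresses than the kept part
# alone. Compression is a crude proxy for algorithmic information content.
import zlib


def info(data: bytes) -> int:
    return len(zlib.compress(data, 9))


def unique_info(copy: bytes, already_kept: bytes) -> int:
    # Roughly K(copy | already_kept): the extra compressed bytes the copy adds.
    return max(0, info(already_kept + copy) - info(already_kept))


mind = b"twenty years of memories, habits, in-jokes, grudges " * 200
identical_copy = mind
diverged_copy = mind + b"plus one extra afternoon spent answering email"

print(unique_info(identical_copy, mind))  # ~0: deleting it loses almost nothing
print(unique_info(diverged_copy, mind))   # small: only the divergent sliver is lost
```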
I think… there’s more to this wrongness-feeling I have than I’ve expressed. I would readily subject a million forks of myself to horrific suffering for the moderate benefit of just one of me. The main reason I’d have reservations about releasing myself on the internet for anyone to download is that people could learn how to manipulate me. The main problems I have with slavery and starvation are that they’re a waste of human resources, and that monolithic power structures are brittle against black swans. In short, I don’t consider it a moral issue what algorithm is computed to produce a particular result.
I’m not sure how to formalize this properly.