I think this can be dealt with in terms of measure. In a series of articles, “Minds, Measure, Substrate and Value,” I have been arguing that copies cannot be considered equally without regard to substrate: we need to take account of a mind’s measure, and the way in which the mind is implemented will affect that measure. (Incidentally, some of you argued against the series; after a long delay [years!], I will be releasing Part 4, which will deal with a lot of these objections.)
Without trying to present the full argument here: the minimum size of the algorithm that can “find” a mind by examining some physical system will determine the measure of that mind, because it gives an indication of how many other algorithms exist that can also find that mind. I think an AI would come to this view too: it would have to use some concept of measure to get coherent results, otherwise it would be finding high-measure, compressed human minds woven into Microsoft Windows (they would just need a LOT of compressing...). Compressing your mind will increase the size of the algorithm needed to find it and will reduce your measure, just as running it on various kinds of physical substrate would. Ultimately, it comes down to this:
“Compressing your mind will have an existential cost, and that cost will depend on the degree of compression.”
(Now, I just know that is going to get argued with, and the justification for it would be long. Seriously, I didn’t just make it up off the top of my head.)
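To make the weighting concrete, here is a toy sketch in the spirit of a Solomonoff-style prior. The 2^-k form and the function names are my own illustration, not something claimed in the articles; all the argument needs is that a longer finding program means lower measure.

```python
# Toy sketch (my assumption, not the articles' formalism): a mind whose
# shortest "finding" program is k bits long gets measure proportional
# to 2**-k, so extra bits of locating/decompression machinery cost measure.

def relative_measure(finding_bits: float, baseline_bits: float) -> float:
    """Measure of a mind relative to a baseline, given the lengths (in bits)
    of the shortest programs that locate each of them."""
    return 2.0 ** -(finding_bits - baseline_bits)

# A mind that takes 10 extra bits to locate has about 1/1024 the measure
# of the baseline implementation.
print(relative_measure(finding_bits=1010, baseline_bits=1000))  # ~0.000977
```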
When Dr Evil carries out his plan, each of the trillion minds can only be found by a decompression program, and that program must contain at least enough bits to distinguish one copy from another. Even ignoring the “overhead” for the mechanics of the decompression algorithm itself, the bits needed to distinguish one copy from another will have an existential cost for each copy, reducing its measure. An AI doing CEV with a consistent approach will take this into account and give each copy a correspondingly smaller vote.
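To put rough numbers on it (still under the 2^-k weighting I assumed above, and still ignoring the decompressor’s own overhead): singling out one copy from a trillion takes about 40 bits, so each copy is discounted by roughly 2^-40.

```python
import math

# Rough arithmetic under the assumed 2**-k discount, ignoring the
# decompressor's own overhead: each copy needs ~log2(1e12) extra bits
# just to be singled out from the other copies.
copies = 10 ** 12
index_bits = math.log2(copies)           # ~39.9 bits to name one copy
per_copy_weight = 2.0 ** -index_bits     # ~1e-12 of a baseline vote
total_weight = copies * per_copy_weight  # ~1.0
print(index_bits, per_copy_weight, total_weight)
```

The neat cancellation, where the whole trillion together carry about one person’s worth of measure, is an artifact of the exact 2^-k form I picked; the argument above only needs the weaker claim that each copy’s vote shrinks as the number of distinguishing bits grows.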
Another scenario, which might give some focus to all this:
What if Dr Evil decides to make one trillion identical copies and run them separately? People would disagree on whether the copies would count: I say they would, and I think that can be justified. However, he can now compress them and keep just the one copy which “implies” the trillion. Again, issues of measure would mean that Dr Evil’s plan would have problems. You could add random bits to the finding algorithm to “find” each mind, but then you are just decreasing the measure: after all, you can do that with anyone’s brain.
That’s compression out of the way.
Another issue is that these copies will only stay nearly identical, and hence highly compressible, as long as they aren’t run for any appreciable length of time (unless you have some kind of constraint mechanism to keep them nearly identical, which might be imagined, but then the AI might take that into account and not regard them as “properly formed” humans). As soon as you start running them, they will start to diverge, and compression will become less viable. Is the AI supposed to ignore this and look at “potential future existence for each copy”? I know someone could say that we just run them very slowly, so that while you and I gain years of experience, each copy gains one second of experience, and during this time the storage requirements increase a bit, but not much. Does that second of experience get the same value in CEV? I don’t pretend to answer these last questions, but the issues are there.
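The storage arithmetic behind “compression will become less viable” is simple enough to sketch; the numbers below are hypothetical, just to show the shape of it.

```python
# Toy model (hypothetical numbers): storing N near-copies costs roughly
# one full mind plus a per-copy diff. While the copies are freshly made,
# the diff is a handful of index bits; once they run and diverge, the
# diff term grows and eventually dominates.

def compressed_size_bits(full_mind_bits: float, n_copies: int,
                         diff_bits_per_copy: float) -> float:
    return full_mind_bits + n_copies * diff_bits_per_copy

mind = 1e15                                           # assumed size of one mind, in bits
fresh = compressed_size_bits(mind, 10**12, 40)        # just-created copies
diverged = compressed_size_bits(mind, 10**12, 1e9)    # after some divergence
print(fresh, diverged)  # the diverged case is vastly larger
```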
That was my first reaction, but if you rely on information-theoretic measures of difference, then insane people will be weighted very heavily, while homogeneous cultures will be weighted very little. The basic precepts of Judaism, Christianity, and Islam might each count as one person.
Does this imply that someone could gain measure by finding a simpler entity with volition similar to theirs and self-modifying into it, or otherwise instantiating it? If so, wouldn’t that encourage people to gamble with their sanity, since verifying similarity of volition is hard, and gets harder the greater the degree of simplification?
I think I know what you are asking here, but I want to be sure. Could you elaborate, maybe with an example?