Felix exists as multiple copies in many universes/Everett branches, and its measure is the sum of the measures of those copies. Each version of mankind can only causally influence (e.g., make happier) the copy of Felix existing in the same universe/branch, and the measure of that copy of Felix shouldn't be much higher than that of an individual human, so there's no reason to treat Felix as a utility monster. Applying acausal reasoning doesn't change this conclusion either. For example, all the parallel versions of mankind could jointly decide to make Felix happier, but while the benefit of that is greater (all the copies of Felix existing near the parallel versions of mankind would get happier), so would the cost.
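To make the scaling explicit (a toy formalization; N, m_F, m_H, and the Δu terms are symbols I'm introducing here, not anything from the thread): suppose there are N parallel versions of mankind, each co-located with a copy of Felix whose measure m_F is comparable to the measure m_H of the humans in that branch. Then

\[
\frac{\text{benefit of the joint decision}}{\text{cost of the joint decision}} \;=\; \frac{N \, m_F \, \Delta u_{\mathrm{Felix}}}{N \, m_H \, \Delta u_{\mathrm{humans}}} \;=\; \frac{m_F \, \Delta u_{\mathrm{Felix}}}{m_H \, \Delta u_{\mathrm{humans}}},
\]

so the N cancels and the acausal trade-off is the same one each branch already faces causally; Felix only looks like a utility monster if m_F greatly exceeds m_H within a single branch.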
If Felix is very simple, it may derive most of its measure from a very short program that just outputs a copy of Felix (rather than from the copies existing in universes/branches containing humans), but there's nothing humans can do to make that copy of Felix happier, so its existence doesn't make any difference.
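For reference, the standard formalization (ordinary algorithmic-information-theory notation, not anything introduced in this thread): the measure of an observer-moment x under the universal distribution is

\[
m(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-\ell(p)} \;\approx\; 2^{-K(x)},
\]

where U is a fixed universal machine, ℓ(p) is the length of program p, and K(x) is the length of the shortest program that outputs x. The sum is dominated by its shortest term, which is why a sufficiently simple Felix could get most of its measure from a tiny "print Felix" program rather than from the longer programs that locate Felix inside a universe containing humans.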
Why? Even within just one copy of Earth, the program that finds Felix should be much shorter than any program that finds a human mind...
Are you thinking that the shortest program that finds Felix in our universe would contain a short description of Felix and find it by pattern matching, whereas the shortest program that finds a human mind would contain the spacetime coordinates of the human? I guess which is shorter would be language-dependent… If there is some sort of standard language that ought to be used, and it turns out the former program is much shorter than the latter in this language, then we could make the program that finds a human mind shorter by, for example, embedding in their brain some kind of artificial material that's easy to recognize and doesn't exist elsewhere in nature. Although I suppose that conclusion isn't much less counterintuitive than "Felix should be treated as a utility monster".
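Spelling the comparison out (my own rough accounting of the two program lengths, not something established in the thread): both locating programs can start from a description of the universe and then differ in how they pick out the mind,

\[
\ell(p_{\mathrm{Felix}}) \;\approx\; \ell(\mathrm{universe}) + \ell(\text{description of Felix}) + O(1), \qquad
\ell(p_{\mathrm{human}}) \;\approx\; \ell(\mathrm{universe}) + \ell(\text{spacetime coordinates}) + O(1).
\]

The artificial-marker trick then amounts to replacing the coordinate term with the description of a short, unique, easy-to-recognize pattern, so that the human-finding program also becomes a pattern matcher.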
Yeah, there's a lot of weird stuff going on here. For example, Paul said some time ago that ASSA gives a thick computer larger measure than a thin computer, so if we run Felix on a computer that is much thicker than human neurons (which shouldn't be hard), it will have larger measure anyway. But on the other hand, the shortest program that finds a particular human may also do that by pattern matching… I no longer understand what's right and what's wrong.
Hal Finney pointed out the same thing a long time ago on the everything-list. I also wrote a post about how we don't seem to value extra identical copies in a linear way, and noted at the end that this also seems to conflict with UDASSA. My current idea (which I'd try to work out if I weren't distracted by other things) is that the universal distribution doesn't tell you how much you should value someone, but only puts an upper bound on how much you can value someone.
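One way to write down the proposed weakening (my own sketch of the idea, not a worked-out theory): instead of the universal distribution fixing the weights in the utility sum, it would only bound them from above,

\[
U \;=\; \sum_x w(x)\, u(x), \qquad 0 \le w(x) \le c\, m(x) \quad \text{rather than} \quad w(x) = c\, m(x),
\]

leaving the choice of w(x) below the bound to one's values. That would allow discounting extra identical copies (sublinear valuation) without contradicting whatever constraint the universal distribution does supply.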
I get the impression that this discussion presupposes that you can’t just point to someone (making the question of “program” length unmotivated). Is there a problem with that point of view or a reason to focus on another one?
(Pointing to something can be interpreted as a generalized program that includes both the thing pointed to, and the pointer. Its semantics is maintained by some process in the environment that’s capable of relating the pointer to the object it points to, just like an interpreter acts on the elements of a program in computer memory.)
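A minimal toy illustration of that reading (the names and data structures here are mine, purely illustrative): the "generalized program" is the pair (pointer, environment), and the environment plays the role of the interpreter that gives the pointer its meaning.

```python
# Toy model of "pointing" as a generalized program: the pointer alone is
# meaningless; some process in the environment relates it to the object
# it designates, analogous to an interpreter acting on program elements
# in computer memory.

environment = {
    "felix": {"kind": "AI", "happiness": 0.9},
    "alice": {"kind": "human", "happiness": 0.6},
}

def dereference(pointer, env):
    """The environment maintains the semantics of the pointer."""
    return env[pointer]

# The generalized program bundles the pointer with the environment it lives in.
generalized_program = ("alice", environment)
print(dereference(*generalized_program))  # {'kind': 'human', 'happiness': 0.6}
```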