Why? Even within just one copy of Earth, the program that finds Felix should be much shorter than any program that finds a human mind...

Are you thinking that the shortest program that finds Felix in our universe would contain a short description of Felix and find it by pattern matching, whereas the shortest program that finds a human mind would contain the spacetime coordinates of that human? I guess which is shorter would be language-dependent… if there is some sort of standard language that ought to be used, and it turns out the former program is much shorter than the latter in this language, then we could make the program that finds a human mind shorter by, for example, embedding some kind of artificial material in their brain that is easy to recognize and doesn't exist elsewhere in nature. Although I suppose that conclusion isn't much less counterintuitive than "Felix should be treated as a utility monster".
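To make the comparison concrete, here is a toy back-of-the-envelope sketch. All the bit counts are invented (and Kolmogorov complexity is uncomputable anyway), so this only illustrates the shape of the argument, not a real calculation:

```python
import math

# Toy comparison of two "locator program" strategies. These bit counts
# are made-up stand-ins, not real Kolmogorov complexities (which are
# uncomputable); the point is only the shape of the comparison.

# Strategy A: pattern matching. Cost is roughly the length of a
# distinguishing description, e.g. a short signature that occurs in
# Felix's implementation and nowhere else. (Hypothetical figure.)
pattern_bits = 200

# Strategy B: spacetime coordinates. Cost is roughly log2 of the number
# of addressable locations, e.g. ~10^240 Planck-scale four-volume cells
# for the observable universe over its history. (Rough figure.)
coordinate_bits = 240 * math.log2(10)

print(f"pattern-matching locator: ~{pattern_bits} bits")
print(f"coordinate locator:      ~{coordinate_bits:.0f} bits")
# Under these assumptions the pattern locator wins by ~600 bits, i.e.
# a measure advantage of ~2^600 — the "Felix as utility monster" worry.
```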
Yeah, there's a lot of weird stuff going on here. For example, Paul said some time ago that ASSA gives a thick computer larger measure than a thin computer, so if we run Felix on a computer that is much thicker than human neurons (which shouldn't be hard), it will have larger measure anyway. But on the other hand, the shortest program that finds a particular human may also do that by pattern matching… I no longer understand what's right and what's wrong.
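Here is a minimal sketch of the thick-computer point as I understand it, under the (assumed) accounting where an observer-moment's measure sums 2^-length over every program that locates it, so each physically redundant layer contributes its own locator:

```python
# Sketch of "thicker computer -> larger measure" under a universal-prior
# style weighting: measure(x) = sum of 2**(-bits) over every program
# that successfully locates x. All bit counts here are invented.

BASE_LOCATOR_BITS = 300   # hypothetical cost of finding the computer at all
LAYER_INDEX_BITS = 4      # extra bits to pick one of up to 16 redundant layers

def measure(num_layers: int) -> float:
    # Each physically redundant layer can be located separately, so each
    # contributes one term of 2^-(base + index) to the total measure.
    return num_layers * 2.0 ** -(BASE_LOCATOR_BITS + LAYER_INDEX_BITS)

thin = measure(1)    # neuron-thin implementation
thick = measure(16)  # the same computation on 16x-redundant hardware
print(thick / thin)  # 16.0 — measure scales with physical "thickness"
```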
For example, Paul said some time ago that ASSA gives a thick computer larger measure than a thin computer, so if we run Felix on a computer that is much thicker than human neurons (which shouldn't be hard), it will have larger measure anyway.
Hal Finney pointed out the same thing a long time ago on the everything-list. I also wrote a post about how we don't seem to value extra identical copies linearly, and noted at the end that this also seems to conflict with UDASSA. My current idea (which I'd try to work out if I weren't distracted by other things) is that the universal distribution doesn't tell you how much you should value someone; it only puts an upper bound on how much you can value someone.
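One way to write that down (just a sketch of the proposal's shape, not a worked-out theory; m is the universal distribution and V the value assigned to an observer-moment):

```latex
% Universal distribution over observer-moments x (U a universal machine):
m(x) = \sum_{p \,:\, U(p) = x} 2^{-\ell(p)}
% UDASSA proper values in proportion to measure: V(x) = c \, m(x).
% The weaker "upper bound" reading keeps only the inequality
V(x) \le c \, m(x) \quad \text{for some fixed } c > 0,
% so a sublinear value for n identical copies, say
V_n = c \, m_1 \, f(n) \quad \text{with concave } f(n) \le n
% (where m_1 is one copy's measure and n copies total roughly n m_1),
% respects the bound while letting extra copies add less than linearly.
```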
I get the impression that this discussion presupposes that you can’t just point to someone (making the question of “program” length unmotivated). Is there a problem with that point of view or a reason to focus on another one?
(Pointing to something can be interpreted as a generalized program that includes both the thing pointed to and the pointer. Its semantics is maintained by some process in the environment that is capable of relating the pointer to the object it points to, just as an interpreter acts on the elements of a program in computer memory.)
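For what it's worth, here is a minimal sketch of that analogy, with the environment playing the interpreter's role (the names are mine and purely illustrative):

```python
# Minimal sketch of a "generalized program" = pointer + the environment
# process that gives the pointer its semantics. The pointer itself is
# short; the locating work lives in the environment, the way a program
# in memory is short but leans on an interpreter to mean anything.

# The environment: whatever process can relate pointers to objects.
# Here it is just a dict; in the physical case it would be the chunk
# of the world that makes "that person over there" refer.
environment = {
    "that person over there": "<the person's full physical state>",
}

def resolve(pointer: str, env: dict) -> str:
    """The interpreter-like step: map the pointer to what it denotes."""
    return env[pointer]

print(resolve("that person over there", environment))
# Any UDASSA-style "program length" charged to the pointer arguably has
# to include its share of this resolution machinery, not just the pointer.
```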