For example, Paul said some time ago that ASSA gives a thick computer larger measure than a thin one, so if we run Felix on a computer that is much thicker than human neurons (which shouldn’t be hard), Felix will have larger measure anyway.
Hal Finney pointed out the same thing a long time ago on everything-list. I also wrote a post about how we don’t seem to value extra identical copies linearly, and noted at the end that this too seems to conflict with UDASSA. My current idea (which I’d try to work out if I weren’t distracted by other things) is that the universal distribution doesn’t tell you how much you should value someone; it only puts an upper bound on how much you can value someone.