See: Flaws. This is the same problem as with Pascal’s Mugging, really; it doesn’t go away when you switch to reals, it just requires weirder (but still plausible) situations.
Seat cushions are meant to be a slightly humorous example. Omega can also hook you up with infinite Fun, which was in the post that, I'm quickly realizing, could use a rewrite.
In that case I’d pick the Fun. I accept the repugnant conclusion and all, but the larger population still has to have more net happiness than the smaller one.
*shrug* I did list that as a separate tier. Surreal Utilities are meant to be a way to formalize tiers; the actual result of the utility-computation depends on where you put your tiers.
The point of this post is to show that humans really do have tiers, and surreals do a good job of representing tiers; the question of how to assign utilities is an open one.
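The tier idea can be made concrete with a finite stand-in: represent a utility as a tuple of tier values (highest tier first) and compare lexicographically, which mirrors how surreal utilities of the form a·ω + b compare. This is a hypothetical sketch of the tier mechanism, not the post's actual formalism; the names and the specific gambles are illustrative.

```python
from fractions import Fraction

def expected_utility(outcomes):
    """outcomes: list of (probability, tier_tuple), highest tier first.
    Returns the tier-wise expected utility as a tuple, which Python
    compares lexicographically -- a finite proxy for surreal comparison."""
    tiers = len(outcomes[0][1])
    return tuple(
        sum(Fraction(p) * u[i] for p, u in outcomes)
        for i in range(tiers)
    )

# Illustrative gambles: a 1% chance at the higher tier (say, a life saved)
# versus a certainty of an enormous lower-tier payoff (seat cushions).
life_gamble = [(Fraction(1, 100), (1, 0)), (Fraction(99, 100), (0, 0))]
cushions = [(1, (0, 10**9))]

# Under lexicographic comparison, any nonzero chance at the higher tier
# dominates, no matter how large the lower-tier payoff.
assert expected_utility(life_gamble) > expected_utility(cushions)
```

The lower tier only breaks ties, i.e. it matters exactly in the "coincidence" case where the expected values at every higher tier are equal; where you draw the tier boundaries determines the result, which is the open question.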
How do you know humans have tiers? The situation has never come up before. We’ve never had the infinite coincidence where the value at the highest tier is zero.
Also, why does it matter? It’s never going to come up either. If you program an AI to have tiers, it will quickly optimize that out. Why waste processing power on lower tiers if it has a chance of helping with the higher ones?
See: gedankenexperiment. I can guess what I’d choose given a blank white room.
And that is a flaw in the system. But it’s one that real-valued utility systems have as well. See: Pascal’s Mugging. An AI vulnerable to Pascal’s Mugging will just spend all its time breaking free of a hypothetical Matrix.
I did mention this under Flaws, you know...