A strong argument, well done.
This indeed puts me in a conundrum: If I answer anything but p=0, I’m giving a kind of weighting factor that destroys the supposedly strict separation between tiers.
However, if I answer p=0, then as long as there is anything that could even remotely or possibly affect my top-tier terminal values, I should rationally disregard pursuing any other, unrelated goal whatsoever.
Obviously, as is evident from my writing here, I do not focus all of my life’s efforts solely on my top-tier values, even though I claim they outweigh any combination of other values.
So I am dealing with my value system in an irrational way. There are two possible conclusions to draw from my confusion:
Are my supposed top-tier terminal values in fact outweighable by others, with “just” a very large conversion coefficient?
or
Do I in fact rank my terminal values as claimed, and am I just making bad choices when it comes to matching my behavior to those values, wasting time on things not strictly related to my top values? (Is it just an instrumental rationality failure?) Anything with a terminal value that is valued infinitely higher than all other values should behave strictly isomorphically to a paperclip maximizer with just that one terminal value, at least in our universe (see the sketch below).
This could be resolved by Omega offering me a straight-out choice, pressing buttons or something. I know what my consciously reflected decision would be, even if my daily routine does not reflect it.
Another case of “do as I say (I’d do in hypothetical scenarios), not as I do (in daily life)” …
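To make the tier logic concrete, here is a minimal sketch of what taking the strict separation literally would imply; the probabilities, payoffs, and names are made up purely for illustration:

```python
# Strict tier separation treated as lexicographic preference:
# compare expected top-tier utility first; the lower tier only
# matters on an exact tie. All numbers are invented for illustration.

def lexicographic_eu(action):
    """Return (expected top-tier utility, expected lower-tier utility)."""
    p_top, u_top, u_lower = action
    return (p_top * u_top, u_lower)

# Option A: a one-in-a-billion chance of slightly advancing the top tier.
option_a = (1e-9, 1.0, 0.0)
# Option B: no effect on the top tier, but an enormous lower-tier payoff.
option_b = (0.0, 0.0, 1_000_000.0)

# Python tuples compare lexicographically, so this implements the tiers.
best = max([option_a, option_b], key=lexicographic_eu)
print(best)  # option_a wins, however large the lower-tier payoff gets
```

Under that rule, any nonzero chance of touching the top tier settles the decision, which is exactly the paperclip-maximizer isomorphism above.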
Well, you could always play with some fun math…
Even that would be equivalent to an expected utility maximizer using just real numbers, except that there’s a well-defined tie-breaker to be used when two different possible decisions would have the exact same expected utility.
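Put as a decision rule, that equivalence might look something like this (a sketch with made-up numbers, not anyone’s actual proposal):

```python
# Real-valued expected utility maximization on the top tier, with the
# lower tier used only as a tie-breaker. Numbers are invented examples.
from typing import List, Tuple

# (expected top-tier utility, expected lower-tier utility)
Action = Tuple[float, float]

def choose(actions: List[Action]) -> Action:
    # Ordinary real-number maximization decides almost every case...
    best_top = max(eu_top for eu_top, _ in actions)
    tied = [a for a in actions if a[0] == best_top]
    # ...and the second tier is consulted only on an exact tie.
    return max(tied, key=lambda a: a[1])

print(choose([(3.0, 5.0), (3.0, 9.0), (2.9, 1_000.0)]))  # -> (3.0, 9.0)
```

The tie-breaker only ever fires when two options have exactly the same top-tier expected utility.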
How often do two options have precisely the same expected utility? Not often, I’m guessing. Especially in the real world.
I guess almost never (in the mathematical sense). OTOH, in the real world the difference is often so tiny that it’s hard to tell its sign—but then, the thing to do is gather more information or flip a coin.
I would like to point out that there is a known bias interfering with said hypothetical scenarios. It’s called “taboo tradeoffs” or “sacred values”, and it’s touched upon here; I don’t think there’s any post that focuses on explaining what it is and how to avoid it, though. One of the more interesting biases, I think.
Of course, your actual preferences could mirror the bias, in this case; let’s not fall prey to the fallacy fallacy ;)