If, however, you say something like “I don’t prefer A less than B, nor more than B, nor equally to B”, I’m just going to give you a very stern look until you realize you’re rather confused.
The simplest way to show the confusion here is just to ask: if those are your only two options (or both obviously dominate all other options – say, Omega will blow up the world if you don’t pick one, though it might be best not to invoke Omega when you can avoid it), how do you choose?
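(A minimal formalization of the point, in standard decision-theory notation that is not from this thread: write $A \preceq B$ for “A is at most as preferred as B”, define indifference by $A \sim B \iff A \preceq B \wedge B \preceq A$ and strict preference by $A \prec B \iff A \preceq B \wedge \neg(B \preceq A)$. The usual completeness axiom is

$$\forall A, B:\quad A \preceq B \;\lor\; B \preceq A,$$

under which exactly one of $A \prec B$, $B \prec A$, $A \sim B$ holds for every pair. Saying “not less, nor more, nor equally” denies all three at once, which is possible only if completeness fails – that is, only if A and B are treated as incomparable.)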
What is invalid about answering, “By performing further computation and evidence gathering”?
And if Omega doesn’t give you that option, then that significantly changes the state of the world, and hence your priority function – including the priority of assigning a relative priority between A and B.
As I said in the top-level post, you can’t treat this priority assignment as non-self-referential.
Edited to add: You should not call people confused because they don’t want to cache thoughts for which they do not yet know the answer.
Yes, the question only applies to final, stable preferences – but AFAIK not everyone agrees that final, stable preferences should be totally ordered.
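(A standard illustration of what non-totally-ordered preferences can look like – my example, not the commenter’s: suppose outcomes are scored on two incommensurable criteria and only dominance yields a weak preference,

$$(x_1, x_2) \preceq (y_1, y_2) \iff x_1 \le y_1 \,\wedge\, x_2 \le y_2.$$

This relation is reflexive and transitive, but $(1, 0)$ and $(0, 1)$ are incomparable under it – neither is weakly preferred to the other – so it is a partial order rather than a total one.)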
How do you conclude that a preference is final and stable?
That seems an extremely strong statement to be making about the inner workings of your own mind.
I don’t believe any of my own preferences are final and stable. The intent is to characterize the structure that (I believe) an idealized agent would have / that the output of my morality has / that I aim at (without necessarily ever reaching it).