That new preference of yours still can’t distinguish the states of air molecules in the room, even if some of those states are made logically impossible by what’s known about macro-objects. This shows both the source of dependence in precise preference and the source of independence in real-world approximations of preference. Independence remains wherever there is no computed information that would bring preference into contact with the facts. Preference is defined procedurally in the mind, and its expression is limited by what can be procedurally figured out.
I don’t really understand what you mean at this point. Take my apples/oranges example, which seems to have nothing to do with macro vs. micro. The Axiom of Independence says I shouldn’t choose the 3rd box. Can you tell me whether you think that’s right, or wrong (meaning I can rationally choose the 3rd box), and why?
To make that example clearer, let’s say that the universe ends right after I eat the apple or orange, so there are no further consequences beyond that.
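(For readers without it at hand, the Axiom of Independence being invoked here is the standard von Neumann–Morgenstern one; the lottery names A, B, C and the probability p below are generic placeholders, not part of the original boxes example:)

```latex
% Axiom of Independence (von Neumann-Morgenstern), standard statement:
% for all lotteries A, B, C and any probability p in (0, 1],
\[
A \succeq B \iff pA + (1-p)C \;\succeq\; pB + (1-p)C .
\]
```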
What if you have some uncertainty about which program our universe corresponds to? In that case, we have to specify preferences for the entire set of programs that our universe may correspond to. If your preferences for what happens in one such program are independent of what happens in another, then we can represent them by a probability distribution on the set of programs plus a utility function on the execution of each individual program. More generally, we can always represent your preferences as a utility function on vectors of the form <E1, E2, E3, …> where E1 is an execution history of P1, E2 is an execution history of P2, and so on.
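(A symbolic restatement of that paragraph may help; the symbols Pr and u_i are my own glosses, not notation from the post, and the sum is one natural reading of "a probability distribution plus a utility function on each individual execution":)

```latex
% Independent case: a probability distribution Pr over programs plus a
% utility function u_i on each individual program's execution history.
\[
U(\langle E_1, E_2, E_3, \ldots \rangle) \;=\; \sum_i \Pr(P_i)\, u_i(E_i)
\]
% General case: no decomposition assumed; U is just a real-valued
% function of the entire vector of execution histories.
\[
U(\langle E_1, E_2, E_3, \ldots \rangle) \in \mathbb{R}
\]
```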
In this case I’m assuming preferences for program executions that aren’t independent of each other, so it falls into the “more generally” category.
To make the example clearer, surely you would need to explain what the “<E1, E2, E3, …>” notation was supposed to mean.
It’s from this paragraph of http://lesswrong.com/lw/15m/towards_a_new_decision_theory/ :

“More generally, we can always represent your preferences as a utility function on vectors of the form <E1, E2, E3, …> where E1 is an execution history of P1, E2 is an execution history of P2, and so on.”
“In this case I’m assuming preferences for program executions that aren’t independent of each other, so it falls into the ‘more generally’ category.”
Got an example?
You originally seemed to suggest that <E1, E2, E3, …> represented some set of preferences.
Now you seem to be saying that it is a bunch of vectors representing possible universes on which some unspecified utility function might operate.
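(To make the two readings concrete, here is a minimal, runnable sketch, entirely my own illustration with hypothetical toy values: a vector of execution histories is one way the whole ensemble of programs could run, and a utility function scores such vectors. The first utility decomposes as a probability-weighted sum of per-program terms, matching the “independent” case; the second cannot be written in that form, matching the “more generally” case:)

```python
from itertools import product

# Toy model: two candidate programs our universe might correspond to,
# each with two possible execution histories. All names are hypothetical.
histories_P1 = ["E1a", "E1b"]  # possible execution histories of P1
histories_P2 = ["E2a", "E2b"]  # possible execution histories of P2

# Independent case: a probability for each program plus a utility on
# each program's own execution history.
prob = {"P1": 0.6, "P2": 0.4}
u1 = {"E1a": 1.0, "E1b": 0.0}
u2 = {"E2a": 0.5, "E2b": 0.2}

def utility_independent(e1, e2):
    """Decomposes as Pr(P1)*u1(E1) + Pr(P2)*u2(E2)."""
    return prob["P1"] * u1[e1] + prob["P2"] * u2[e2]

def utility_general(e1, e2):
    """Cares about the vector <E1, E2> as a whole: it rewards one
    specific combination, and no additive form f(e1) + g(e2) can
    reproduce this table of values (so no probability-weighted sum of
    per-program utilities represents it)."""
    return 1.0 if (e1, e2) == ("E1a", "E2b") else 0.0

# Score every vector of execution histories under both utilities.
for e1, e2 in product(histories_P1, histories_P2):
    print(f"<{e1}, {e2}>  independent={utility_independent(e1, e2):.2f}"
          f"  general={utility_general(e1, e2):.2f}")
```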