Maybe it’s just me, but this looks like another case of overextrapolation from a community of rationalists to all of humanity. You think about all the conversations you’ve had distinguishing beliefs from values, and you figure everyone else must think that way.
In reality, people don’t normally make such a precise division. But don’t take my word for it. Go up to your random mouthbreather and try to find out how well they adhere to a value/belief distinction. Ask them whether the utility assigned to an outcome, or its probability, was the bigger factor.
No one actually does those calculations consciously; if anything like them is done non-consciously, it’s extremely economical in computation.
Simple: the extraction cuts across preexisting independencies. (I don’t quite see what you refer to by “extraction”, but my answer seems general enough to cover most possibilities.)
I’m referring to the extraction that you were talking about: extracting human preference into prior and utility. Again, the question is why the necessary independence for this exists in the first place.
I was talking about extracting a prior about a narrow situation as the simple, extractable aspect of preference, period. Utility is just the rest: what remains unextractable in preference.
Ok, I see. In that case, do you think there is still a puzzle to be solved, about why human preferences seem to have a large amount of independence (compared to, say, a set of randomly chosen transitive preferences), or not?
That’s just a different puzzle. You are asking a question about properties of human preference now, not of prior/utility separation. I don’t expect strict independence anywhere.
Independence is indifference, due to an inability to see and precisely evaluate all consequences, made strict in the form of probability by decree of maximum entropy. If you know your preference about an event, but have no preference over or understanding of the uniform elements it consists of, you are indifferent to these elements; hence the maximum entropy rule, as with air molecules in a room. Multiple events that you care about only in themselves, but not in the way they interact, are modeled as independent.
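To spell out the maximum-entropy point in one formula (a standard fact, not something argued for in this thread): if all you constrain are the marginals of two events and you say nothing about how they interact, the entropy-maximizing joint distribution is the product of the marginals, i.e. it treats the events as independent.

```latex
% Maximize joint entropy subject only to the two marginal constraints:
\max_{p}\; -\!\sum_{a,b} p(a,b)\,\log p(a,b)
\quad\text{s.t.}\quad
\sum_{b} p(a,b) = p_A(a),\qquad
\sum_{a} p(a,b) = p_B(b).
% The unique maximizer is the product distribution, i.e. A and B come out independent:
p^{*}(a,b) \;=\; p_A(a)\,p_B(b).
```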
[W]hy human preferences seem to have a large amount of independence (compared to, say, a set of randomly chosen transitive preferences)[?]
Randomness is info, so of course the result will be more complex. Where you are indifferent, random choice will fill in the blanks.
It sounds like what you’re saying is that independence is a necessary consequence of our preferences having limited information. I had considered this possibility and don’t think it’s right, because I can give a set of preferences with little independence and also little information, just by choosing the preferences using a pseudorandom number generator.
I think there is still a puzzle here: why our preferences show a very specific kind of structure (non-randomness).
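To make the pseudorandom-preferences point concrete, here is a minimal sketch (the 3×3 outcome space and all names are mine, purely illustrative): the whole ordering is specified by a single PRNG seed, so it carries very little information, yet it is transitive and generally not separable into independent preferences over the two components.

```python
import itertools
import random

# Two "independent-looking" attributes and their joint outcomes.
fruits = ["apple", "orange", "banana"]
drinks = ["tea", "coffee", "water"]
outcomes = list(itertools.product(fruits, drinks))  # 9 joint outcomes

# The entire preference ordering is generated from one seed: low information,
# but a full transitive ranking over joint outcomes.
rng = random.Random(0)
rng.shuffle(outcomes)
rank = {outcome: i for i, outcome in enumerate(outcomes)}  # smaller = more preferred

# If the preference were separable (independent in the relevant sense), the ranking
# of fruits would not depend on which drink they are paired with.
# With a pseudorandom ranking it almost always does:
for drink in drinks:
    print(drink, sorted(fruits, key=lambda f: rank[(f, drink)]))
```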
That new preference of yours still can’t distinguish the states of air molecules in the room, even if some of these states are made logically impossible by what’s known about macro-objects. This shows both the source of dependence in precise preference and the source of independence in real-world approximations of preference. Independence remains where there is no computed info that allows preference to be brought into contact with facts. Preference is defined procedurally in the mind, and its expression is limited by what can be procedurally figured out.
I don’t really understand what you mean at this point. Take my apples/oranges example, which seems to have nothing to do with macro vs. micro. The Axiom of Independence says I shouldn’t choose the 3rd box. Can you tell me whether you think that’s right, or wrong (meaning I can rationally choose the 3rd box), and why?
To make that example clearer, let’s say that the universe ends right after I eat the apple or orange, so there are no further consequences beyond that.
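For reference, the axiom being appealed to, in its standard vNM form (this is the generic statement; it does not restate the specific three-box setup): for lotteries A, B, C and any mixing probability p in (0, 1],

```latex
A \succeq B
\quad\Longleftrightarrow\quad
p\,A + (1-p)\,C \;\succeq\; p\,B + (1-p)\,C .
% In particular, indifference between A and B forces indifference between any two
% mixtures that differ only by substituting A for B.
```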
What if you have some uncertainty about which program our universe corresponds to? In that case, we have to specify preferences for the entire set of programs that our universe may correspond to. If your preferences for what happens in one such program are independent of what happens in another, then we can represent them by a probability distribution on the set of programs plus a utility function on the execution of each individual program. More generally, we can always represent your preferences as a utility function on vectors of the form <E1, E2, E3, ...>, where E1 is an execution history of P1, E2 is an execution history of P2, and so on.
In this case I’m assuming preferences for program executions that aren’t independent of each other, so it falls into the “more generally” category.
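Writing out the two representations from the quoted paragraph (the expected-utility sum in the independent case is my reading of "a probability distribution on the set of programs plus a utility function on the execution of each individual program", not a quote):

```latex
% Independent case: a prior over the candidate programs P_i plus per-program utilities U_i,
U\big(\langle E_1, E_2, \ldots \rangle\big) \;=\; \sum_i \Pr(P_i)\, U_i(E_i),
\qquad E_i \text{ an execution history of } P_i .
% General (non-independent) case: a single utility function taken directly on the whole vector,
U\big(\langle E_1, E_2, \ldots \rangle\big) \text{, with no further decomposition.}
```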
But why do human preferences exhibit the (approximate) independence which allows the extraction to take place?
Simple. They don’t.
To make the example clearer, surely you would need to explain what the "<E1, E2, E3, ...>" notation was supposed to mean.
It’s from this paragraph of http://lesswrong.com/lw/15m/towards_a_new_decision_theory/ (quoted above).
In this case I’m assuming preferences for program executions that aren’t independent of each other, so it falls into the “more generally” category.
Got an example?
You originally seemed to suggest that <E1, E2, E3, ...> represented some set of preferences.
Now you seem to be saying that it is a bunch of vectors representing possible universes on which some unspecified utility function might operate.