In the ideal situation, it’s important that there be no direct communication. A realistic situation can match this ideal one if you remove the constraint of “no chit-chat” but add the difficulty of lying.
Yes, this allows you (in the realistic scenario) to use an “honor hack” to make up for deficiencies in your decision theory (or utility function), but my point is that you can avoid this complication by simply having a decision theory that gives weight to SAMELs.
Gives how much weight to SAMELs? Do we need to know our evolutionary (selective) history in order to perform the calculation?
The answers depend on whether you’re trying to make your own decision theory reflectively consistent, or looking at someone else’s. But either way, finding the exact relative weight and the exact relevance of the evolutionary history is beyond the scope of the article; what’s important is that SAMELs’ explanatory power be used at all.
My off-the-cuff objections to “constraints” were expressed on another branch of this discussion.
Like I said in my first reply to you, revealed preferences don’t uniquely determine a utility function: if someone pays Omega in Parfit’s Hitchhiker (PH), you can explain that either with a utility function that values just the survivor, or with one that values the survivor and Omega. You have to look at desiderata other than the utility function’s consistency with revealed preferences.
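To make that underdetermination concrete, here’s a minimal sketch (the payoff numbers, the outcome features, and the samel_weight knob are all hypothetical illustration, not anything from the article): the same revealed choice of paying falls out of a survivor-only utility function paired with a decision theory that gives weight to SAMELs, and also out of a utility function with an extra terminal value for Omega paired with a decision theory that doesn’t.

```python
# Toy illustration with hypothetical numbers: the observed choice "pay" in
# Parfit's Hitchhiker is consistent with two different utility functions,
# so revealed preferences alone don't pin the utility function down.

# Outcome features for each action after being rescued.
OUTCOMES = {
    "pay":      dict(survived=True, money_kept=False, omega_paid=True),
    "dont_pay": dict(survived=True, money_kept=True,  omega_paid=False),
}

def u_selfish(o):
    # Values only the survivor's own welfare (and, mildly, the money).
    return 100 * o["survived"] + 1 * o["money_kept"]

def u_altruistic(o):
    # Also places terminal value on Omega being paid.
    return 100 * o["survived"] + 1 * o["money_kept"] + 5 * o["omega_paid"]

def choose(utility, samel_weight=0.0):
    """Pick an action; samel_weight is an arbitrary stand-in for how much the
    decision theory credits 'pay' for its subjunctive link (a SAMEL) to
    having been rescued at all."""
    def score(action):
        s = utility(OUTCOMES[action])
        if action == "pay":
            s += samel_weight  # acausal credit for being an agent who pays
        return s
    return max(OUTCOMES, key=score)

# Explanation 1: selfish utility function, decision theory that weights SAMELs.
assert choose(u_selfish, samel_weight=10.0) == "pay"
# Explanation 2: extra terminal value for Omega, no SAMEL weighting.
assert choose(u_altruistic, samel_weight=0.0) == "pay"
```

Either way, an outside observer just sees “pays”; that’s the sense in which revealed preferences alone can’t settle which utility function is at work.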
It is pretty clear that you and I have different “aesthetics” as to what counts as a “complication”.
Well, you’re entitled to your own aesthetics, but not your own complexity. (Okay, you are, but I wanted it to sound catchy.) As I said in footnote 2, trying to account for someone’s actions by positing more terminal values (i.e. positive terms in the utility function) requires strictly more assumptions than positing fewer values and instead drawing on the implications of assumptions you’d have to make anyway.
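If it helps, the comparison footnote 2 gestures at can be spelled out as a toy assumption count (the labels below are mine, purely illustrative): the explanation that adds a terminal value for Omega needs everything the SAMEL-based explanation needs, plus one more positive term in the utility function.

```python
# Purely illustrative assumption sets; the labels are mine, not the article's.
shared_assumptions = {
    "the agent values the survivor's welfare",
    "the agent's decision theory gives weight to SAMELs",  # needed anyway
}

# Positing an extra terminal value keeps all of the above and adds one more
# positive term in the utility function:
extra_terminal_value = shared_assumptions | {
    "the agent terminally values Omega being paid",
}

assert shared_assumptions < extra_terminal_value  # strictly more assumptions
```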