> Who gives how much weight to SAMELs? Do we need to know our evolutionary (selective) history in order to perform the calculation?
The answers determine whether you’re trying to make your own decision theory reflectively consistent, or looking at someone else’s. But either way, finding the exact relative weight and exact relevance of the evolutionary history is beyond the scope of the article; what’s important is that SAMELs’ explanatory power be used at all.
> My off-the-cuff objections to “constraints” were expressed on another branch of this discussion.
Like I said in my first reply to you, revealed preferences don’t uniquely determine a utility function: if someone pays Omega in Parfit’s Hitchhiker (PH), you can explain that either with a utility function that values only the survivor, or with one that values both the survivor and Omega. You have to look at desiderata other than the utility function’s consistency with revealed preferences.
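Here’s a toy illustration of that underdetermination (the payoff numbers and the predictor-enforced outcome mapping are my own, not from the article): given that Omega’s prediction ties rescue to the disposition to pay, both utility functions select the same action, so the revealed choice alone can’t tell them apart.

```python
# Parfit's Hitchhiker, reduced to two outcomes. Omega's prediction links
# rescue to the disposition to pay, so "pay" -> (rescued, Omega paid) and
# "no_pay" -> (left in desert, Omega unpaid). Payoffs are illustrative.
outcomes = {
    "pay":    (True,  True),   # agent survives, Omega gets paid
    "no_pay": (False, False),  # agent dies, Omega gets nothing
}

def u_survivor_only(survives, omega_paid):
    # Terminal value on the agent's survival alone; paying is justified
    # subjunctively, via the SAMEL between the disposition and the rescue.
    return 100 if survives else 0

def u_survivor_and_omega(survives, omega_paid):
    # Same survival term, plus a terminal value on Omega being paid.
    return (100 if survives else 0) + (10 if omega_paid else 0)

for u in (u_survivor_only, u_survivor_and_omega):
    best = max(outcomes, key=lambda action: u(*outcomes[action]))
    print(u.__name__, "->", best)  # both print: pay
```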
> It is pretty clear that you and I have different “aesthetics” as to what counts as a “complication”.
Well, you’re entitled to your own aesthetics, but not your own complexity. (Okay, you are, but I wanted it to sound catchy.) As I said in footnote 2, accounting for someone’s actions by positing more terminal values (i.e., more positive terms in the utility function) requires strictly more assumptions than positing fewer values and then drawing on the implications of assumptions you’d have to make anyway.
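To make “strictly more assumptions” concrete, here’s a minimal sketch (the assumption labels are my own paraphrase of the two explanations above): the extra-terminal-value explanation’s assumption set strictly contains the leaner one’s, because the SAMEL-honoring decision theory has to be assumed either way.

```python
# Assumptions you need anyway to explain the rest of the agent's behavior.
shared = {
    "agent terminally values its own survival",
    "agent's decision theory honors SAMELs",
}

fewer_values = shared                                     # paying falls out of SAMELs
more_values = shared | {"agent terminally values Omega"}  # adds a terminal value

assert fewer_values < more_values  # strict subset: strictly fewer assumptions
```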