I dislike all examples involving omniscient beings.
I would also prefer to assume that natural selection endowed us with sub-conscious body language and other cues which make us very bad at lying.
The only thing Omega uses its omniscience for is to detect if you’re lying, so if humans are bad at lying convincingly you don’t need omniscience.
Also, “prefer to assume” indicates extreme irrationality: you can’t be rational if you are choosing what to believe based on anything other than the evidence; see Robin Hanson’s post You Are Never Entitled to Your Opinion. Of course, you probably didn’t mean that; you probably just meant:
Natural selection endowed us with sub-conscious body language and other cues which make us very bad at lying.
Say what you mean; otherwise you end up with Belief in Belief.
As I have answered repeatedly on this thread, when I said “prefer to assume”, I actually meant “prefer to assume”. If you are interpreting that as “prefer to believe” you are not reading carefully enough.
One makes (sometimes fictional) assumptions when constructing a model. One is only irrational when one imagines that a model represents reality.
If it makes you happy, insert a link to some profundity by Eliezer about maps and territories at this point in my reply.
Heh, serve me right for not paying attention.
The only thing Omega uses its omniscience for is to detect if you’re lying...
If I understand the OP correctly, it is important to him that this example not include any chit-chat between the hitchhiker and Omega. So what Omega actually detects is propensity to pay, not lying.
Minor point.
In the ideal situation, it’s important that there be no direct communication. A realistic situation can match this ideal one if you remove the constraint of “no chit-chat” but add the difficulty of lying.
Yes, this allows you (in the realistic scenario) to use an “honor hack” to make up for deficiencies in your decision theory (or utility function), but my point is that you can avoid this complication by simply having a decision theory that gives weight to SAMELs.
Gives how much weight to SAMELs? Do we need to know our evolutionary (selective) history in order to perform the calculation?
The answers depend on whether you’re trying to make your own decision theory reflectively consistent, or looking at someone else’s. But either way, finding the exact relative weight and exact relevance of the evolutionary history is beyond the scope of the article; what’s important is that SAMELs’ explanatory power be used at all.
My off-the-cuff objections to “constraints” were expressed on another branch of this discussion.
Like I said in my first reply to you, the revealed preferences don’t uniquely determine a utility function: if someone pays Omega in PH, then you can explain that either with a utility function that values just the survivor, or one that values the survivor and Omega. You have to look at desiderata other than UF consistency with revealed preferences.
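To make that concrete, here is a toy sketch. It is purely illustrative: the numbers are made up, and the “ex-ante weighting” below is just a stand-in for a decision theory that gives weight to SAMELs; none of it is from the article itself.

```python
# Toy illustration: two different ways to "explain" a hitchhiker who pays
# Omega after being rescued. All values are made up for the example.

ALIVE = 1000  # assumed utility of surviving the desert
COST = 100    # the payment Omega asks for

def selfish_uf_with_ex_ante_weighting():
    """Utility function that values just the survivor, paired with a decision
    theory that weights the pre-rescue fact that only agents disposed to pay
    get picked up at all (a stand-in for giving weight to SAMELs)."""
    u_if_disposed_to_pay = ALIVE - COST  # rescued, then pays
    u_if_not_disposed = 0                # left in the desert
    return "pay" if u_if_disposed_to_pay > u_if_not_disposed else "refuse"

def omega_valuing_uf():
    """Utility function with an extra terminal value on Omega being paid,
    evaluated naively after the rescue has already happened."""
    VALUE_ON_OMEGA_BEING_PAID = 150      # extra posited term, larger than COST
    u_pay = ALIVE - COST + VALUE_ON_OMEGA_BEING_PAID
    u_refuse = ALIVE
    return "pay" if u_pay > u_refuse else "refuse"

# Both models predict the same revealed behavior:
print(selfish_uf_with_ex_ante_weighting())  # pay
print(omega_valuing_uf())                   # pay
```

Both routes predict the same observed payment, so the payment alone can’t tell you which utility function the person has; that’s why you need some other desideratum to pick between them.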
It is pretty clear that you and I have different “aesthetics” as to what counts as a “complication”.
Well, you’re entitled to your own aesthetics, but not your own complexity. (Okay, you are, but I wanted it to sound catchy.) As I said in footnote 2, trying to account for someone’s actions by positing more terminal values (i.e. positive terms in the utility function) requires you to make strictly more assumptions than when you assume fewer, but then draw on the implications of assumptions you’d have to make anyway.