I would like to say that I agree with the arguments presented in this post, even though the OP eventually retracted them. I think the arguments for why EDT leads to the wrong decision are themselves wrong.
As mentioned by others, EY referred to this argument as the ‘tickle defense’ in section 9.1 of his TDT paper. I am not defending the advocates whom EY attacked, since (assuming EY hasn’t misrepresented them) they have made some mistakes of their own. In particular, they argue for two-boxing.
I will start by talking about the ability to introspect. Imagine God promised Solomon that Solomon won’t be overthrown. Then the decision of whether or not to sleep with other men’s wives is easy, and Solomon can just act on his preferences. Yet if Solomon can’t introspect, then in the original situation he doesn’t know whether he prefers sleeping with others’ wives or not. So Solomon’s inability to introspect means that there is information he can rationally react to in some situations but not in others. While problems like that can occur in real people, I don’t expect a theory of rational behavior to have to deal with them. So I assume an agent either knows what its preferences are or consistently fails to act on them.
In fact, the meta-tickle defense doesn’t really deal with lack of introspection either. It assumes an agent can think about an issue and ‘decide’ on it, only to not act on that decision but rather to use that ‘decision’ as information. An agent that really couldn’t introspect wouldn’t be able to do that.
The tickle defense has been used to defend two-boxing. While this argument isn’t mentioned in the paper, it is described in one of the comments here. This argument has been rebutted by the original poster AlexMennen. I would like to add something to that: for an agent to find out for sure whether it is a one-boxer or a two-boxer, the agent must run a complete simulation of itself in Newcomb’s problem. If it tries to find this out as part of its strategy for Newcomb’s problem, it will get into an infinite loop.
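The regress can be made concrete with a toy sketch (my own illustrative code, not anything from the post): an agent whose decision procedure begins by fully simulating itself facing the same problem never bottoms out.

```python
def decide():
    """Toy agent whose strategy for Newcomb's problem is to first learn
    its own choice by running a complete simulation of itself."""
    # To know whether it is a one-boxer, the agent simulates itself in
    # Newcomb's problem... but the simulated self asks the same question,
    # so the recursion never reaches a base case.
    simulated_choice = decide()
    return simulated_choice

try:
    decide()
except RecursionError:
    print("infinite regress: the self-simulation never terminates")
```

In Python the regress surfaces as a `RecursionError` once the interpreter's stack limit is hit; an idealized agent with unbounded resources would simply loop forever.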
benelliott raised a final argument here. He postulated that charisma is not related to a preference for screwing wives, but rather to whether a king’s reasoning would lead him to actually do it. Here I have to question whether the hypothetical situation makes sense. For real people an intrinsic personality trait might change their bottom-line conclusion, but this behavior is irrational. An ideal rational agent cannot have a trait of the form that charisma is postulated to have. benelliott also left open the possibility that the populace has Omega-like abilities, but then the situation is really just another form of Newcomb’s problem, and the rational choice is to not screw wives.
Overall I think that EDT actually does lead to rational behavior in these sorts of situations. In fact I think it is better than TDT, because TDT requires that computations with one right answer not only have probabilities and correlations between them, but also causal relations between them. I am unconvinced of this and unsatisfied with the various attempts to deal with it.
Sadly, this was in a fairly obscure post and the arguments failed to percolate to the lesswrong community.
I have made similar remarks in a comment here: