I confess I do not grasp the problem well enough to see what is wrong with my comment. I am trying to formalize the problem, and I think the formalism I describe is sensible.
Once again, I’ll reword it, but I think you’ll still find it too vague: to win, one must act rationally, and the set of possible actions includes modifying one’s code.
The question was
My timeless decision theory only functions in cases where the other agents’ decisions can be viewed as functions of one argument, that argument being your own choice in that particular case—either by specification (as in Newcomb’s Problem) or by symmetry (as in the Prisoner’s Dilemma). If their decision is allowed to depend on how your decision depends on their decision—like saying, “I’ll cooperate, not ‘if the other agent cooperates’, but only if the other agent cooperates if and only if I cooperate—if I predict the other agent to cooperate unconditionally, then I’ll just defect”—then in general I do not know how to resolve the resulting infinite regress of conditionality, except in the special case of predictable symmetry.
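The regress described in the quote can be made concrete with a toy sketch (purely illustrative, not TDT itself): if each agent’s decision procedure predicts the other by simulating the other’s decision procedure, neither simulation ever bottoms out.

```python
# Hypothetical sketch of the "infinite regress of conditionality":
# each agent decides by simulating the other's decision procedure,
# which in turn simulates it back, forever.

def agent_a():
    # A cooperates iff its simulation of B cooperates.
    return agent_b()

def agent_b():
    # B likewise conditions on its simulation of A.
    return agent_a()

try:
    agent_a()
except RecursionError:
    print("regress: neither simulation bottoms out")
```

Conditioning on decisions directly, as here, yields unbounded mutual simulation; the symmetry escape hatch works precisely because it lets each agent answer without running the simulation at all.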
I do not know the specifics of Eliezer’s timeless decision theory, but it seems to me that if one views the decision processes of other agents as based on their beliefs about your code, not on your decisions, there is no infinite regress.
You could say: ah, but there is your belief about an agent’s code, then his belief about your belief about his code, then your belief about his belief about your belief about his code, and that looks like an infinite regress. However, there is really no regress, since “his belief about your belief about his code” is entirely contained in “your belief about his code”.
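The containment claim can be illustrated with a hypothetical nested-belief structure (the field names and strategy labels are invented for the example): your model of the other agent’s code already includes his model of you, so any deeper-sounding query is just a lookup inside an object you already hold, not a new level of regress.

```python
# Hypothetical belief model: your belief about his code is one finite
# object, and his modeled beliefs about you are a part of it.
your_belief_about_his_code = {
    "strategy": "cooperate_if_partner_cooperates",
    "his_belief_about_your_code": {
        "strategy": "cooperate_if_partner_cooperates",
        # His model of your beliefs would nest here, but it is still
        # part of the same finite object you already hold.
    },
}

# "His belief about your belief about his code" is answered by
# descending inside your own model -- no new level is created.
inner = your_belief_about_his_code["his_belief_about_your_code"]
print("his modeled strategy:", inner["strategy"])
# prints: his modeled strategy: cooperate_if_partner_cooperates
```

The point of the sketch is only that the nesting lives inside a single belief object, so querying it terminates.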
Again, if you can state the same with precision, it could be valuable; at this level my reply is “So?”.
Thanks, this comment makes your point clearer. See cousin_it’s post Re-formalizing PD.