Can you formalize the idea of “copying” and show why expected utility maximization fails once I have “copied” myself? I think I understand why Newcomb’s problem is interesting and significant, but in terms of an AI rewriting its source code… well, my brain is changing all the time and I don’t think I have any problems with expected utility maximization.
We can formalize “copying” by using information sets that include more than one node, as I tried to do in this post. Expected utility maximization fails on such problems because your subjective probability of being at a certain node might depend on the action you’re about to take, as mentioned in this thread.
The Absent-Minded Driver problem is an example of such dependence: your subjective probability of being at the second intersection depends on the probability with which you choose to continue straight at the first, and the two intersections are indistinguishable to you.
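Here is a minimal numerical sketch of that dependence, assuming the standard payoffs from Piccione and Rubinstein's version of the problem (exiting at the first intersection pays 0, exiting at the second pays 4, continuing past both pays 1); those payoff values and the helper names below are my own illustration, not something stated above.

```python
def expected_utility(p):
    """Expected payoff of the policy 'continue with probability p' at every
    intersection (the driver cannot tell the two intersections apart).
    Assumed payoffs: 0 for exiting at the first, 4 for exiting at the
    second, 1 for continuing past both."""
    exit_first = (1 - p) * 0        # exit at the first intersection
    exit_second = p * (1 - p) * 4   # continue, then exit at the second
    continue_both = p * p * 1       # continue at both
    return exit_first + exit_second + continue_both

def prob_at_second(p):
    """Subjective probability of being at the second intersection, given
    that you find yourself at *some* intersection and your own policy is
    to continue with probability p.  You always reach the first and reach
    the second with probability p, so P(second | at an intersection)
    = p / (1 + p)."""
    return p / (1 + p)

if __name__ == "__main__":
    # The planning-optimal policy maximizes 4p(1-p) + p^2 = 4p - 3p^2,
    # which peaks at p = 2/3 with expected payoff 4/3.
    best_p = max((i / 1000 for i in range(1001)), key=expected_utility)
    print(f"planning-optimal p ~ {best_p:.3f}, EU ~ {expected_utility(best_p):.3f}")

    # The dependence mentioned above: your credence of being at the second
    # intersection is not fixed -- it moves with your own policy.
    for p in (0.0, 0.5, 2 / 3, 1.0):
        print(f"p = {p:.3f}: P(at second intersection) = {prob_at_second(p):.3f}")
```

The point of the second function is that the probabilities you would plug into a naive expected-utility calculation at the intersection are themselves a function of the very policy you are trying to choose, which is what breaks the usual maximization story once one information set contains more than one node.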