Newcomb’s problem is, in my reading, about how an agent A should decide in a counterfactual in which another agent B decides conditional on the outcome of a future decision of A.

I tried to show that, under certain conditions (deliberate noncompliance by A), it is not possible for B to know A’s future decision any better than chance (something that, in the limit of atomic-resolution scanning and practically unlimited processing power, is only possible thanks to “quantum mumbo-jumbo”).

This is, IMHO, a form of “dissolving” the question, though perhaps the meaning of “dissolving” is somewhat stretched here.

This does not, of course, apply to all Newcomblike problems, namely those where A complies and B can gather enough data about A and has enough processing power.
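The noncompliance argument can be made concrete with a toy simulation (all names here are illustrative, not from the original): if A deliberately conditions its choice on a random bit that B cannot access, then even a predictor with a perfect model of A's deterministic policy does no better than chance.

```python
import random

def agent_decision(coin: int) -> str:
    # A deliberately ties its choice to a bit B has no access to
    # (standing in for irreducible "quantum mumbo-jumbo").
    return "one-box" if coin else "two-box"

def predictor_guess() -> str:
    # B has a perfect model of agent_decision itself, but since the
    # coin is independent of everything B can scan, any guess B makes
    # is uncorrelated with A's actual choice. A fixed guess suffices
    # to illustrate this.
    return "one-box"

def prediction_accuracy(trials: int, rng: random.Random) -> float:
    # Fraction of trials on which B's guess matches A's decision.
    hits = sum(
        predictor_guess() == agent_decision(rng.randint(0, 1))
        for _ in range(trials)
    )
    return hits / trials

acc = prediction_accuracy(100_000, random.Random(0))
print(acc)  # hovers around 0.5: B cannot beat chance against a randomizing A
```

If A instead complies (returns a fixed answer independent of the coin), B's model predicts it perfectly, which is exactly the case the last paragraph excludes.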