There are some decision algorithms that would pay the £1 if and only if they believed in the quantum many-worlds interpretation (MWI).
Go on then, which decision algorithms? Note, though: they do have to be plausible models of agency. I don’t think it’s going to be all that informative if a pointedly irrational model acts contingently on foundational theory when CDT and FDT don’t.
An agent might care about (and acausally cooperate with) all versions of himself that “exist”, and MWI posits more such versions. Imagine that he wants there to exist an artist like the one he could become, and a scientist like the one he could become, but with diminishing returns: the first 50% of universes containing each are more important than the second 50%. Then under MWI he could throw a quantum coin to decide which to dedicate himself to, while under a collapse interpretation (CI) the same coin toss would sacrifice one of his dreams.
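A toy expected-value calculation makes the asymmetry explicit. The weighting here is my own illustrative assumption (full weight for the first 50% of universes containing a life-path, half weight beyond that), not anything from the comment above:

```python
# Toy model of the quantum-coin argument. Assumed weighting (illustrative):
# measure inside the first 50% of universes containing a given life-path
# counts at full value; measure beyond that counts at half value.

def path_value(measure):
    """Diminishing value of `measure` units of universes containing one path."""
    first_half = min(measure, 0.5)
    rest = max(measure - 0.5, 0.0)
    return first_half * 1.0 + rest * 0.5

# Commit to one path outright: every universe contains it.
commit = path_value(1.0)                                  # 0.75

# MWI + quantum coin: the world splits, half the measure contains each path.
mwi_coin = path_value(0.5) + path_value(0.5)              # 1.0

# CI + quantum coin: only one branch is real, so whichever way the coin
# lands, all the measure ends up on a single path.
ci_coin = 0.5 * path_value(1.0) + 0.5 * path_value(1.0)   # 0.75

print(commit, mwi_coin, ci_coin)
```

Under these assumptions the coin strictly beats committing under MWI (1.0 vs 0.75) and gains nothing under CI, so this agent’s behaviour really does hinge on the interpretation.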
The agent first updates on the evidence it has, and then takes logical counterfactuals over each possible action. This means it only cooperates in Newcomblike situations with agents it believes actually exist. It will one-box in Newcomb’s problem, and cooperate in a prisoner’s dilemma against an identical duplicate of itself. However, it won’t pay up in logical counterfactual blackmail, nor in any counterfactual blackmail implemented with true randomness; but if the blackmail uses a quantum coin and the agent believes MWI, the other branch genuinely exists, so it will pay.
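Here is a minimal sketch of how that plays out in coin-based counterfactual blackmail: on tails you are asked to pay 1, and on heads a predictor would have paid 100 iff your policy pays on tails. The framing and payoffs are my own toy construction, not a standard implementation:

```python
# Sketch of the update-then-counterfact agent on coin-based counterfactual
# blackmail: on tails you are asked to pay 1; on heads a predictor pays 100
# iff your policy would pay on tails. Payoffs are illustrative assumptions.

def value_of_policy(pays_on_tails, other_branch_exists):
    # The agent has already updated: it has observed tails. It then takes a
    # logical counterfactual over its policy. The heads branch only enters
    # the sum if the agent believes that branch exists -- true for a quantum
    # coin under MWI, false under CI or for a logical "coin".
    tails_payoff = -1 if pays_on_tails else 0
    heads_payoff = (100 if pays_on_tails else 0) if other_branch_exists else 0
    return tails_payoff + heads_payoff

best = lambda exists: max([True, False],
                          key=lambda p: value_of_policy(p, exists))

print(best(True))   # True  -> quantum coin + MWI: the agent pays
print(best(False))  # False -> CI or logical coin: the agent refuses
```

This is exactly the contingency claimed at the top of the thread: the same algorithm pays the £1 iff it believes the counterfactual branch exists.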
(I think this is a good chance for you to think of an answer yourself.)