If you use some form of noncausal decision theory, it can make a difference.
Suppose Omega flips a quantum coin. If it lands tails, Omega asks you for £1; if it lands heads, Omega gives you £100 if and only if it predicts that you would have given it the £1 had the coin landed tails.
There are some decision algorithms that would pay the £1 if and only if they believed in quantum many-worlds. A CDT agent would never pay, however, and a UDT agent would always pay.
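To make the UDT/CDT split concrete, here is a rough expected-value sketch. The 50/50 quantum coin, the £1 ask, and the £100 reward come from the setup above; the framing of "policy before the flip" versus "act after seeing tails" is my own illustration.

```python
# Illustrative expected values for the coin problem above (numbers from the setup,
# framing is an assumption of mine).
p_heads = 0.5

# UDT-style evaluation: score the whole policy before knowing the coin's outcome.
ev_policy_pay    = p_heads * 100 + (1 - p_heads) * (-1)   # 49.5
ev_policy_refuse = 0.0

# CDT-style evaluation: score the act after seeing tails, where paying can no
# longer causally affect the already-settled heads branch.
ev_act_pay_after_tails    = -1.0
ev_act_refuse_after_tails = 0.0

print(ev_policy_pay, ev_policy_refuse, ev_act_pay_after_tails, ev_act_refuse_after_tails)
```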
It is of course possible to construct agents that want to do X if and only if quantum many-worlds is true. It is also possible to construct agents that do the same thing whether it's true or false (e.g. AlphaGo).
The answer to this question depends on which wave-function collapse theory you use. There are a number of quantum superposition experiments in which we can detect that no collapse is happening; if photons collapsed their superposition in the double-slit experiment, we wouldn't get an interference pattern. Collapse theories postulate some set of circumstances, not yet probed by experiment, under which collapse happens. If you believe that quantum collapse only happens when 10^40 kg of mass are in a single coherent superposition, this belief has almost no effect on your predictions.
If you believe that you can't get 100 atoms into superposition, then you are wrong; current experiments have already tested that. If you believe that collapse happens at the 1-gram level, then future experiments could test this. In short, there are collapse theories in which collapse is so rare that you will never spot it, theories in which collapse is so common that we would already have spotted it (so we know those theories are wrong), and theories in between. The in-between theories make different predictions about future experiments; in particular, they do not expect large quantum computers to work.
Another difference is that current QFT doesn't contain gravity. In the search for a true theory of everything, many-worlds and collapse might suggest different successors. This seems important to human understanding, though it wouldn't make a difference to an agent that could consider all possible theories.
There are some decision algorithms that would pay the £1 if and only if they believed in quantum many worlds
Go on then, which decision algorithms? Note, though: they do have to be plausible models of agency. I don't think it's going to be all that informative if a pointedly irrational model acts contingent on foundational theory when CDT and FDT don't.
An agent might care about (and acausally cooperate with) all versions of himself that "exist". MWI posits more such versions. Imagine that he wants there to exist an artist like he could be, and a scientist like he could be, but the first 50% of universes that contain each are more important than the second 50%. Then under MWI he could throw a quantum coin to decide what to dedicate himself to, while under CI this would sacrifice one of his dreams.
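A toy calculation may make this concrete. The weights below are made up by me; the only assumption carried over from the comment is that the first 50% of measure containing a given life path counts for more than the second 50%.

```python
# Toy model with illustrative weights: value of a life path as a function of the
# fraction of worlds ("measure") that contains it. The first 50% of measure is
# worth 1.0 per unit, the second 50% only 0.2 per unit.
def path_value(measure):
    first = min(measure, 0.5) * 1.0
    second = max(measure - 0.5, 0.0) * 0.2
    return first + second

# Many-worlds + quantum coin: both paths really exist, each in half the worlds.
mwi_coin_flip = path_value(0.5) + path_value(0.5)     # 0.5 + 0.5 = 1.0

# Committing to a single path, or flipping the coin under a collapse view where
# only one outcome ever becomes real: all the measure goes to one path.
one_path_only = path_value(1.0) + path_value(0.0)     # 0.6 + 0.0 = 0.6

print(mwi_coin_flip, one_path_only)  # the coin flip only helps if both branches exist
```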
The agent first updates on the evidence that it has, and then takes logical counterfactuals over each possible action. This behaviour means that it only cooperates in Newcomblike situations with agents it believes actually exist. It will one-box in Newcomb's problem, and cooperate in a prisoner's dilemma against an identical duplicate of itself. However, it won't pay in logical counterfactual blackmail, or in any form of counterfactual blackmail accomplished with true randomness.
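Here is a minimal sketch, my own toy formalization rather than anything from the comment, of how such an agent ends up paying Omega under many-worlds but not under collapse: after updating on seeing tails, it scores each action over the worlds it believes actually exist, under the logical counterfactual that every instance and prediction of its algorithm outputs that action. World names, weights, and payoffs are assumptions.

```python
# "Update first, then take logical counterfactuals" agent, applied to Omega's
# quantum-coin problem from the question above.

def utility_of_action(action, worlds_believed_real):
    """Sum the payoff over every world the agent believes exists, assuming all
    instances/predictions of this algorithm output `action`."""
    total = 0.0
    for world, weight in worlds_believed_real.items():
        if world == "tails-branch":        # the agent is asked for £1 here
            total += weight * (-1 if action == "pay" else 0)
        elif world == "heads-branch":      # Omega pays £100 iff it predicts "pay"
            total += weight * (100 if action == "pay" else 0)
    return total

def decide(worlds_believed_real):
    return max(["pay", "refuse"], key=lambda a: utility_of_action(a, worlds_believed_real))

# After updating on seeing tails:
many_worlds = {"tails-branch": 0.5, "heads-branch": 0.5}  # the heads branch still exists
collapse    = {"tails-branch": 1.0}                       # the heads branch never happened

print(decide(many_worlds))  # "pay":    -0.5 + 50 > 0
print(decide(collapse))     # "refuse": paying just loses £1
```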
(I think this is a good chance for you to think of an answer yourself.)