[Question] When would an agent do something different as a result of believing the many worlds theory?
One of the things impeding the many-worlds vs. wavefunction-collapse debate is that nobody seems able to point to a situation in which the difference clearly matters, where we would make a different decision depending on which theory we believe. If no such situation exists, pragmatism would instruct us to write the question off as meaningless.
Has anyone tried to pose a compelling thought experiment in which the difference matters?