Do you have any specific problem in mind? Have you read some of the post-2000 papers on how MWI works, like "Everett and Structure"?
From the paper:
Two sorts of objection can be raised against the decoherence approach to definiteness. The first is purely technical: will decoherence really lead to a preferred basis in physically realistic situations, and will that preferred basis be one in which macroscopic objects have at least approximate definiteness. Evaluating the progress made in establishing this would be beyond the scope of this paper, but there is good reason to be optimistic. The other sort of objection is more conceptual in nature: it is the claim that even if the technical success of the decoherence program is assumed, it will not be enough to solve the problem of indefiniteness...
So David Wallace would agree that "decoherence for free", mapping QM onto macroscopic observations without postulating a new non-unitary rule, has not yet been established on that tiny little, nitpicky "purely technical" level. The difference is that Wallace presumably believes that success is Right Around the Corner, whereas I believe the 50 years of failure are strong evidence that the basic approach is entirely wrong. (And yes, I feel the same way about 20 years of failure in String Theory.) Time will tell.