Exploiting causal loops to solve NP problems does not involve checking all candidates in sequence and then transporting the answer back. Rather, it involves checking only one candidate, but deciding which candidate to check in such a way that the situation is self-consistent if and only if that one candidate is the correct answer. In context, this depends on being able to foresee the outcome of a simple, firmly decided conditional strategy, where the events you plan to condition on are the contents of the vision itself.
So if the visions are generated by a computationally unbounded process that extrapolates from inexact snapshots of the present (which include plans and dispositions but not some of the other contents of minds), then the NP trick could work: The dependency of the future on Alice’s reaction to the vision is well-defined and available to the extrapolation process. Or it could just give her a headache; that’s self-consistent too.
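A minimal sketch of this fixed-point search, using subset-sum as a stand-in NP problem. Everything here (the `vision_generator`, the "headache" fallback, Alice's announce-or-contradict strategy) is an illustrative toy model of the mechanism described above, not anything from the original: the unbounded extrapolator does all the enumeration, while in-universe Alice only ever checks the one candidate she is shown.

```python
from itertools import chain, combinations

def subsets(xs):
    """All subsets of xs, smallest first."""
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def alice_reaction(vision, target):
    """Alice's firmly decided conditional strategy: verify the single
    candidate shown in the vision; announce it only if it checks out,
    otherwise deliberately contradict the vision."""
    if vision == "headache":
        return "headache"
    return vision if sum(vision) == target else "contradiction"

def vision_generator(nums, target):
    """Toy computationally unbounded extrapolator: it may only emit a
    vision that is self-consistent, i.e. one where extrapolated-Alice's
    reaction reproduces the vision's own content."""
    for candidate in subsets(nums):
        if alice_reaction(candidate, target) == candidate:
            return candidate  # the only non-trivial fixed point: a correct answer
    return "headache"         # trivially self-consistent fallback

print(vision_generator([3, 5, 7, 11], 15))  # → (3, 5, 7)
```

Note that the headache branch is always a consistent outcome, which is why the trick only works if the generator prefers informative fixed points over the trivial one.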
If the vision generator refuses to hypothesize any visions within the extrapolation process, or if it doesn’t care whether extrapolated-Alice gets false visions, or if it’s computationally bounded and only iterates towards a fixed point at a limited rate, then the trick would fail.
And if it’s not extrapolation-based, then I dunno, but I can’t think of any interpretations that would be incompatible with a headache.
But Alice’s power doesn’t work like that. It predicts the future conditional on Alice not having seen the prediction.