Kind of… but mostly I just mean that if you have a problem whose answer is easy to check once you have it but hard to find in the first place (roughly what it means for a problem to be in NP; the NP-complete ones are the hard cases), then Alice can look at the future where you found the answer and told her what it is, then tell you the answer now, and you only have to check it. (Being able to find the answer as cheaply as you can check it is what would make NP = P, IIRC. There’s currently a claimed proof that this is impossible in peer review.)
If there is some kind of fuzziness like I described, then she might just see a piece of paper with an equation or something. If she can read text, then this is a serious problem for luminosity!theoretical computer science.
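Just to make that check-vs-find asymmetry concrete, here's a toy subset-sum instance (my own illustration, nothing from the story):

```python
from collections import Counter
from itertools import combinations

def verify(numbers, target, subset):
    """Checking a claimed answer: one pass, linear time."""
    return not (Counter(subset) - Counter(numbers)) and sum(subset) == target

def solve(numbers, target):
    """Finding the answer yourself: up to 2**n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [2, 7, 13, 22, 31]
print(verify(nums, 42, (7, 13, 22)))  # True -- the cheap check Alice leaves you with
print(solve(nums, 42))                # (7, 13, 22) -- the expensive search she skips
```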
It would take almost as long to check things like this with Alice as it would to check them manually on your own. You’d have to make up your mind to check each possible answer individually, and then Alice could see what would happen if you went on to actually check it. That might save a little time relative to actually going through the elaborate checking rigmarole, though adding a second person to the task makes it more person-hours on net.
But you can’t just make up your mind to check every possible answer without picking out a specific one and deciding, “I’m planning to check this one and then stop.” That isn’t a plausible thing for you to decide to do if you actually know how hard the problem is supposed to be, because there’s no way you’d actually follow through. So Alice can’t see you finding the answer.
Exploiting causal loops to solve NP problems does not involve checking all candidates in sequence and then transporting the answer back. Rather, it involves checking only one candidate, but deciding which candidate to check in such a way that the situation is self-consistent if and only if that one candidate is the correct answer. In context, this depends on being able to foresee the outcome of a simple, firmly decided conditional strategy, where the events you plan to condition on are the contents of the vision itself.
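Concretely, the firmly decided strategy could look like this (a hypothetical sketch; `check` stands in for whatever cheap verification the problem admits):

```python
def alice_strategy(shown, check):
    """Check only the single candidate the vision shows.  Announcing it
    only if it passes makes any vision of a wrong candidate self-refuting,
    so the correct answer is the only self-consistent thing to be shown."""
    if check(shown):
        return shown    # the vision comes true: a fixed point
    return shown + 1    # the vision is falsified: no fixed point here
```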
So if the visions are generated by a computationally unbounded process that extrapolates from inexact snapshots of the present (which include plans and dispositions but not some of the other contents of minds), then the NP trick could work: the dependency of the future on Alice’s reaction to the vision is well-defined and available to the extrapolation process. Or it could just give her a headache; that’s self-consistent too.
If the vision generator refuses to hypothesize any visions within the extrapolation process, or if it doesn’t care whether extrapolated-Alice gets false visions, or if it’s computationally bounded and only iterates towards a fixed point at a limited rate, then the trick would fail.
And if it’s not extrapolation-based, then I dunno, but I can’t think of any interpretations that would be incompatible with a headache.
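To make the extrapolation picture concrete, here's a toy vision generator over `alice_strategy` from the sketch above (again, purely my own illustration). The unbounded version can afford an outright hunt for the fixed point; the budgeted version just iterates the map and usually gives up:

```python
def consistent_vision(check, candidates, budget=None):
    """Return a vision v with alice_strategy(v, check) == v, i.e. a
    vision that comes true, or 'headache' if none is found."""
    if budget is None:
        # Computationally unbounded extrapolation: the generator, not
        # Alice, pays for the exhaustive fixed-point search.
        for v in candidates:
            if alice_strategy(v, check) == v:
                return v
        return "headache"
    # Bounded generator: iterate toward a fixed point at a limited
    # rate -- no faster than brute force, so the trick fails.
    v = candidates[0]
    for _ in range(budget):
        nxt = alice_strategy(v, check)
        if nxt == v:
            return v
        v = nxt
    return "headache"

check = lambda v: v == 742                                # secret answer
print(consistent_vision(check, range(1000)))              # 742
print(consistent_vision(check, range(1000), budget=10))   # 'headache'
```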
But Alice’s power doesn’t work like that. It predicts the future conditional on Alice not having seen the prediction.
The point is that you can still solve these problems; it just takes significantly longer to solve them than it does to check the answer once you have it.
So you decide to run a brute-force calculation on a computer for three days, write down the answer, and tell her (à la kavitra’s comment); then she sees the answer on the paper in the future, and you change your mind and just check the answer.
Because Alice sees the future as it would be without the prediction, there’s no need for a stable causal loop like the one pengvado discusses.
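As a simulation of that scheme, reusing the toy subset-sum functions above (entirely invented; out here the "foreseen" line has to be computed the slow way, which is exactly the cost Alice's power would eat):

```python
def ask_alice(numbers, target):
    """Commit to the slow search, read the answer off the foreseen
    paper, then cancel the plan and pay only for the cheap check."""
    foreseen = solve(numbers, target)   # in-story this line is free
    if foreseen is not None and verify(numbers, target, foreseen):
        return foreseen                 # verified; skip the three days
    return None                         # nothing valid to foresee
```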
So it’s mostly useful for problems of moderate difficulty: ones you could solve without the power given considerable effort, but not more effort than you’d actually be willing to put forth.
For example, you could set Hashcash to mint some expensive stamp, and firmly decide to check its output in three days.
(For those who don’t want to take the time to read the link, Hashcash is a proof-of-work system based on brute-force partial preimage attacks against hash (one-way) algorithms.)
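A bare-bones version of that kind of stamp minting (my own sketch, not Hashcash's actual stamp format):

```python
import hashlib
from itertools import count

def mint(resource: str, bits: int = 20) -> str:
    """Minting: brute-force a counter until the SHA-1 digest of the stamp
    has `bits` leading zero bits -- about 2**bits hashes on average."""
    for nonce in count():
        stamp = f"{resource}:{nonce}"
        digest = int.from_bytes(hashlib.sha1(stamp.encode()).digest(), "big")
        if digest >> (160 - bits) == 0:
            return stamp

def check_stamp(stamp: str, bits: int = 20) -> bool:
    """Checking: one hash and one comparison, no matter who minted it."""
    digest = int.from_bytes(hashlib.sha1(stamp.encode()).digest(), "big")
    return digest >> (160 - bits) == 0
```

So you'd firmly decide to let `mint` run for three days, and Alice only ever has to pay for `check_stamp`.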
Wow, that sounds like including it might involve math. Sorry.