as an aside, does the P vs NP distinction even matter in practice?
Yes, it does, for several reasons:
Proving P != NP is basically necessary to get a lot of other results to work, and for some of those results it is also sufficient.
If P != NP (as most people suspect), then lots of problems fundamentally cannot be solved both generally and quickly without exploiting structure. In particular, it flips the burden of proof onto the algorithm designer: they have to explain why their solution to a problem like SAT is efficient, rather than me having to disprove the existence of an efficient algorithm.
Any claimed efficient solution has to come from one of three places: exploiting structure, somehow having a proof that P = NP, or relying on new physics models that enable computing NP-complete problems efficiently, and the latter two require very, very strong evidence.
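To make that concrete, here is a minimal sketch (purely illustrative; the function name and the DIMACS-style clause encoding are my own choices) of what solving SAT looks like when you exploit no structure at all: exhaustive search over all 2^n assignments. Structure-exploiting solvers exist precisely to avoid this blowup on real instances, but no known algorithm avoids it in general.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide satisfiability of a CNF formula by exhaustive search.

    clauses: list of clauses, each a list of nonzero ints (DIMACS-style:
             literal v means variable v is true, -v means it is false).
    n_vars:  number of variables, numbered 1..n_vars.

    Tries all 2**n_vars assignments, so the running time is exponential
    in the number of variables -- the cost of exploiting no structure.
    """
    for bits in product([False, True], repeat=n_vars):
        assignment = {v: bits[v - 1] for v in range(1, n_vars + 1)}
        # A clause is satisfied if at least one of its literals is true.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment  # satisfying assignment found
    return None  # unsatisfiable

# (x1 or ~x2) and (x2 or x3) and (~x1 or ~x3)
print(brute_force_sat([[1, -2], [2, 3], [-1, -3]], 3))
```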
This applies in particular to basically all learning problems in AI today.
It explains why certain problems cannot reasonably be solved optimally without huge discoveries; the travelling salesman problem is the classic example of a problem we cannot solve optimally at scale, along with a whole lot of other NP-complete problems. There are also NP problems we don't expect to solve efficiently even when a natural parameter is small, especially if FPT != W[1] holds (e.g. k-Clique would then have no fixed-parameter tractable algorithm).
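To illustrate the travelling salesman point, the obvious exact algorithm enumerates all (n - 1)! tours, as in the sketch below (the function name and distance matrix are made up for illustration). The best known exact algorithms, like Held-Karp dynamic programming, only improve this to roughly 2^n time, which is still exponential.

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Find an optimal tour by checking every permutation of cities.

    dist: n x n matrix of pairwise distances.

    Fixes city 0 as the start and tries all (n-1)! orderings of the
    remaining cities, so even n = 20 is already out of reach
    (19! is about 1.2e17 tours).
    """
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        # Sum edge costs around the cycle, including the return edge.
        cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(brute_force_tsp(dist))  # optimal 4-city tour
```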
Note also that we expect many NP-complete problems to resist fast algorithms even in the average case, not just the worst case, which means the hardness is likely to be relevant quite a lot of the time; we don't have to limit ourselves to worst-case instances.