I am curious how often asymptotic results, proven using features of the problem that seem basically irrelevant to practice, end up becoming relevant in practice.
Like, I understand that there are many asymptotic results (e.g., the free energy principle in SLT) that are useful in practice, but I feel like there’s something sus about similar results from information theory or complexity theory, where the way they prove certain bounds (or inclusion relationships, for complexity theory) seems totally detached from practicality.
The source-channel separation theorem (a.k.a. the joint source-channel coding theorem) is often cited as the reason we can treat compression and redundancy as separate problems, but when you actually look at the proof, it only establishes possibility (proven via insanely long codes), so it is not at all trivial that this equivalence holds in the context of practical code engineering.
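For reference, here is a rough asymptotic statement of the separation result in question (my paraphrase, assuming a stationary memoryless source and channel):

```latex
% Source-channel separation, rough asymptotic form (block length n -> infinity),
% for a source S with entropy rate H(S) and a channel of capacity C:
\[
  H(S) < C \;\Longrightarrow\; \text{reliable transmission is achievable by separate source and channel codes},
\]
\[
  H(S) > C \;\Longrightarrow\; \text{reliable transmission is impossible for any scheme, joint or separate}.
\]
```

The guarantee is purely asymptotic in the block length, which is exactly the complaint above: it says nothing about the code lengths or engineering effort a practical system would need.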
Complexity theory talks about things like quantifying some property over all possible Boolean circuits of a given size, which seems to me to hinge on a feature of the problem so utterly irrelevant to real programs that I’m suspicious it can say meaningful things about what we see in practice.
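To make “quantifying over all Boolean circuits of a given size” concrete, the move is often just a counting argument. Here is a minimal sketch (my own toy code, not from the thread) comparing a crude upper bound on the number of size-s circuits against the number of n-bit Boolean functions, in the spirit of Shannon’s counting lower bound:

```python
import math

def log2_num_circuits(n, s, gate_types=16):
    """Crude upper bound on log2(# circuits) with at most s two-input gates
    over n inputs: each gate picks one of `gate_types` gate types and two
    predecessors among the inputs and other gates, giving at most
    (gate_types * (n + s)**2) ** s circuits (a generous overcount)."""
    return s * math.log2(gate_types * (n + s) ** 2)

def log2_num_functions(n):
    """log2 of the number of Boolean functions on n inputs, i.e. 2^n."""
    return 2 ** n

# Find, for each n, a size s-1 at which circuits are still outnumbered by
# functions; by this crude bound, some n-bit function has no circuit that small.
for n in range(4, 21, 4):
    s = 1
    while log2_num_circuits(n, s) < log2_num_functions(n):
        s += 1
    print(f"n={n:2d}: some function needs more than {s - 1} gates")
```

The conclusion (most Boolean functions require huge circuits) comes entirely from counting and never exhibits an explicit hard function, which is arguably part of why such results feel detached from practice.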
As an aside, does the P vs NP distinction even matter in practice? We just … seem to have very good approximations to NP problems from algorithms that take into account the structure specific to the problem and to the domains where we want things to be fast; and as long as complexity-theoretic methods don’t take that fine, problem-specific structure into account, I don’t see how they can characterize such well-approximated problems using complexity classes.
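On the “good approximations by exploiting structure” point, worst-case theory does have something to say here too. A minimal sketch (my own toy example): minimum vertex cover is NP-hard to solve exactly, yet a few lines of greedy matching are provably within a factor of 2 of the optimum.

```python
def vertex_cover_2approx(edges):
    """Maximal-matching heuristic: take both endpoints of every edge not yet
    covered. The picked edges form a matching, and any cover must contain at
    least one endpoint of each matched edge, so the result is at most twice
    the size of an optimal cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Toy example: a 5-cycle. An optimal cover has 3 vertices; the heuristic picks 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(vertex_cover_2approx(edges))  # {0, 1, 2, 3}
```

Which problems admit such guarantees and which do not is itself a complexity-class-level question; the approximation-hardness results mentioned further down (via the PCP theorem) are exactly of that kind.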
Wigderson’s book has a short section on average-case complexity, which I hoped would be this kind of result, and I’m unimpressed (the problem doesn’t sound any easier: now how do you specify the natural distribution?).
One result worth mentioning in computational complexity is the PCP theorem, which not only gives probabilistically checkable proofs but also yields hardness-of-approximation results. It seems deep, but I haven’t understood the proof yet.
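For reference, the statement itself is compact even if the proof is not (standard form, as I understand it):

```latex
% PCP theorem: every NP language has proofs that a randomized verifier can check
% using O(log n) random bits while reading only O(1) bits of the proof.
\[
  \mathrm{NP} \;=\; \mathrm{PCP}\big[O(\log n),\, O(1)\big]
\]
```

Read as a reduction, it produces gap instances where even approximating the fraction of satisfiable clauses of a 3SAT formula beyond some constant factor is already NP-hard, which is where the approximation-hardness consequences come from.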
Great question. I don’t have a satisfying answer. Perhaps a cynical answer is survivorship bias: we remember the asymptotic results that eventually become relevant (because people develop practical algorithms or a deeper theory is discovered) but don’t remember the irrelevant ones.
Existence results are categorically easier to prove than explicit algorithms. Indeed, an existence claim may hold classically while being unprovable intuitionistically, since the classical proof need not supply a construction. So we would expect non-explicit existence results to appear before explicit algorithms.
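A standard illustration of that gap (not from the thread): the classical proof that there exist irrationals a, b with a^b rational proves existence without ever telling you which witnesses work.

```latex
% Let x = \sqrt{2}^{\sqrt{2}}. By excluded middle, either x is rational,
% in which case take a = b = \sqrt{2}; or x is irrational, in which case
% x^{\sqrt{2}} = \sqrt{2}^{\sqrt{2}\cdot\sqrt{2}} = \sqrt{2}^{2} = 2,
% so take a = x, b = \sqrt{2}. Either way:
\[
  \exists\, a, b \in \mathbb{R}\setminus\mathbb{Q} \ \text{such that}\ a^b \in \mathbb{Q}.
\]
% The argument never determines which case holds, so it yields no explicit witness.
```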
One minor remark on ‘quantifying over all Boolean circuits’. Unease with quantification over large domains may be a vestige of set-theoretic thinking that imagines types as (platonic) boxes. But a term inhabiting a for-all quantifier is better thought of as an algorithm/method for checking the property on any given term (in this case a Boolean circuit). That doesn’t sound divorced from practice to my ears.
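A loose illustration of that reading (my own toy code, and only an analogy): the ‘term’ witnessing a for-all statement over circuits is best thought of as a method you can run on any concrete circuit you are handed.

```python
from itertools import product
from typing import Callable

# Model a Boolean circuit simply as a function from input bits to a bit.
Circuit = Callable[..., bool]

def check_monotone(circuit: Circuit, n_inputs: int) -> bool:
    """A 'proof term' for 'for all circuits c of this arity built from AND/OR,
    c is monotone' reads as a method like this one: handed any concrete
    circuit, it produces the evidence for that circuit (here by brute force)."""
    points = list(product((False, True), repeat=n_inputs))
    return all(
        circuit(*x) <= circuit(*y)            # raising inputs never lowers the output
        for x in points
        for y in points
        if all(a <= b for a, b in zip(x, y))  # x is coordinatewise below y
    )

# A small AND/OR circuit: (x AND y) OR z; monotone, as the checker confirms.
c = lambda x, y, z: (x and y) or z
assert check_monotone(c, 3)
```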
as an aside, does the P vs NP distinction even matter in practice?
Yes, it does, for several reasons:
Proving P != NP is basically necessary for a lot of other results to go through, and for some of those results it is also sufficient.
If P != NP (as most people suspect), it fundamentally rules out solving lots of problems both generally and quickly without exploiting structure. In particular, it lets me flip the burden of proof onto the algorithm’s author to explain why their solution to a problem like SAT is efficient, rather than me having to disprove the existence of an efficient algorithm.
Any such claimed efficiency comes either from exploiting structure, from somehow having a proof that P = NP, or from relying on new physics models that enable solving NP-complete problems efficiently, and the latter two require very, very strong evidence.
This in particular applies to basically all learning problems in AI today.
It explains why certain problems cannot reasonably be solved optimally without huge discoveries; the best-known example is the travelling salesman problem, along with a whole lot of other NP-complete problems. There are also NP problems that we don’t expect to be solvable efficiently at all, especially if FPT != W[1] holds (a toy illustration of this parameterized point follows below).
Note also that we expect many NP-complete problems not to be solvable by fast algorithms even in the average case, which means the hardness is likely to bite quite a lot of the time; we don’t have to limit ourselves to the worst case.
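On the FPT vs W[1] remark above, here is a minimal sketch (my own toy example). k-Clique is the canonical W[1]-complete problem: the obvious algorithm checks all ~n^k vertex subsets, and if FPT != W[1] there is no f(k) * poly(n) algorithm, i.e. the parameter k cannot be moved out of the exponent.

```python
from itertools import combinations

def has_k_clique(adj, k):
    """Brute force over all C(n, k) vertex subsets, roughly n^k time.
    k-Clique is W[1]-complete, so assuming FPT != W[1] no algorithm runs in
    f(k) * poly(n) time; the dependence on k stays in the exponent."""
    vertices = range(len(adj))
    return any(
        all(adj[u][v] for u, v in combinations(subset, 2))
        for subset in combinations(vertices, k)
    )

# Toy graph: a triangle plus an isolated vertex.
adj = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
]
assert has_k_clique(adj, 3)
assert not has_k_clique(adj, 4)
```

Contrast with vertex cover, which is also NP-complete but fixed-parameter tractable in the size of the cover.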
P vs NP: https://en.wikipedia.org/wiki/Generic-case_complexity