I think that’s partly because the usual computational complexity results aren’t terribly relevant here. Many NP-complete problems, while very difficult to solve exactly, are quite computationally tractable for any constant allowable error. (For example, finding a route in the traveling salesman problem that is within 1% of the best route, or within any other error bound, is tractable, provided you hold the error bound fixed with respect to the problem size.)
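To make that kind of guarantee concrete, here is a minimal sketch of the classic MST-doubling heuristic for metric TSP. This is a toy illustration of my own, and it only gives a fixed factor-of-2 guarantee rather than an arbitrary error bound, but it shows what a provable approximation bound looks like. It assumes the distances satisfy the triangle inequality.

```python
# Sketch: MST-doubling 2-approximation for metric TSP.
# Assumes the distance matrix satisfies the triangle inequality;
# under that assumption the returned tour is at most twice optimal.

def mst_edges(dist):
    """Prim's algorithm on a complete graph given a distance matrix."""
    n = len(dist)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        u, v = min(
            ((i, j) for i in in_tree for j in range(n) if j not in in_tree),
            key=lambda e: dist[e[0]][e[1]],
        )
        edges.append((u, v))
        in_tree.add(v)
    return edges

def approx_tsp_tour(dist):
    """Return a tour whose length is at most 2x optimal on metric instances."""
    n = len(dist)
    adj = [[] for _ in range(n)]
    for u, v in mst_edges(dist):
        adj[u].append(v)
        adj[v].append(u)
    # Preorder walk of the MST, shortcutting already-visited vertices.
    tour, seen, stack = [], set(), [0]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        tour.append(node)
        stack.extend(reversed(adj[node]))
    return tour

# Tiny made-up example: points on a line, so distances are metric.
points = [0.0, 1.0, 4.0, 6.0]
dist = [[abs(a - b) for b in points] for a in points]
tour = approx_tsp_tour(dist)
length = sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))
print(tour, length)
```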
So, the fact that a proper Bayesian update on a complex network is intractable is not necessarily relevant, if you can guarantee that your approximate update has bounded error.
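As a toy illustration of what “approximate update with a quantifiable error” can mean, here is a sketch of rejection sampling on a tiny two-variable network of my own invention (Rain and WetGrass, with made-up probabilities). By standard Monte Carlo arguments the estimate’s error shrinks roughly as one over the square root of the number of accepted samples.

```python
# Sketch: rejection-sampling approximation to a Bayesian update on a
# tiny two-variable network (Rain -> WetGrass). The estimate converges
# to the exact posterior, with error shrinking roughly as
# 1/sqrt(accepted samples).
import random

P_RAIN = 0.2                            # prior P(rain)
P_WET_GIVEN = {True: 0.9, False: 0.1}   # P(wet | rain)

def sample_posterior_rain_given_wet(n_samples, seed=0):
    rng = random.Random(seed)
    accepted = rain_and_wet = 0
    for _ in range(n_samples):
        rain = rng.random() < P_RAIN
        wet = rng.random() < P_WET_GIVEN[rain]
        if wet:                          # condition on the evidence
            accepted += 1
            rain_and_wet += rain
    return rain_and_wet / accepted, accepted

exact = (0.9 * 0.2) / (0.9 * 0.2 + 0.1 * 0.8)   # about 0.692
estimate, n_used = sample_posterior_rain_given_wet(100_000)
print(f"exact={exact:.4f} estimate={estimate:.4f} ({n_used} accepted samples)")
```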
Also, my general sense is that bounded-error complexity proofs are much less common, and impossibility proofs for bounded-error results are rarer still. (But it’s not something I’ve studied at all, so my confidence here is low.)
I don’t necessarily want a thorough discussion of particular results so much as an injection of the idea of computational complexity into the LW memeplex. It certainly seems more valuable than, say, quantum mechanics.
It’s more likely for this topic than for QM that an existing introduction to computational complexity is good enough. It would still take some searching to find a good one to recommend, though!
I tentatively agree. However, I’m concerned that a low-quality understanding of the limits of computational complexity would have a negative impact on discussion, because the complexity results often depend on very fragile assumptions that can be violated without significant harm (e.g., by replacing exact solutions with bounded-error ones).
I agree that a low-quality understanding of things has a negative impact on discussion, but I don’t see a reason to apply that skepticism more strongly to a concept that hasn’t yet been introduced to the LW memeplex than to concepts that are already in it.
Hm... the set of NP-complete problems that admit a polynomial-time approximation scheme is not that large, and the approximation schemes can sometimes be pretty bad (e.g., getting answers within 1% takes time n^100). Besides, Bayesian inference is at least #P-complete, possibly PSPACE-complete, and a 1% error in the prior can blow up into arbitrarily large error in the posterior, so approximation algorithms won’t do anyway.
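To illustrate the blow-up claim with made-up numbers (taking “1% error” to mean an absolute error of one percentage point on a small prior, which is one possible reading), a tiny calculation:

```python
# Toy illustration: a small absolute error in a small prior can produce
# a large error in the posterior when the likelihood ratio is strong.
def posterior(prior, p_e_given_h=0.99, p_e_given_not_h=0.001):
    """P(H | E) for evidence E with the given (made-up) likelihoods."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

true_prior = 0.0001
perturbed_prior = true_prior + 0.01   # absolute error of one percentage point

print(posterior(true_prior))        # roughly 0.090
print(posterior(perturbed_prior))   # roughly 0.910
```

With that strong likelihood ratio, the two posteriors land at roughly 0.09 and 0.91, even though the priors differ by only a percentage point.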
My general impression (having studied computer science, and complexity theory in passing, but being far from an expert) is that bounded-error approximations are not as well studied as exact solutions. I also get the impression that most bounded-error complexity results are results about specific approximation algorithms, not proofs about the limits of all such algorithms. Are impossibility results for bounded-error approximations common?
In other words, when you say “the set of NP-complete problems where there is a polynomial-time approximation scheme is not that large”, do you mean the set where such approximations are known, or the set where they are not known to be impossible?
I’m curious to learn more.