Interesting, although I’m not sure what you say follows:
As the number of clauses increases, so should the number of variables (randomly chosen). That is, the alpha that your result mentions can well fluctuate between over- and underconstrained values even with a googol clauses. Alpha, being the clauses-to-variables ratio, does not increase solely because the clauses increase, since the variables are randomly filled in as well.
Even for overconstrained values of alpha, the problem is still NP-complete, i.e. (as far as we know) exponential in the worst case. Being able to make an accurate prediction for overconstrained values is certainly possible, but would we call that heuristic an “intuition”?
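To make that concrete, here is a toy sketch (in Python; the generation model is entirely made up for illustration, nobody upthread has specified one) in which both the clause count and the variable count are drawn at random, so alpha wanders on both sides of the threshold no matter how large the clause count gets:

```python
import random

random.seed(0)
THRESHOLD = 4.27  # empirically observed satisfiability threshold for random 3-SAT

for _ in range(5):
    c = random.randint(10**5, 10**6)    # number of clauses, drawn at random
    v = random.randint(c // 8, c // 2)  # number of variables, drawn at random too
    alpha = c / v                       # ranges over roughly 2..8, straddling 4.27
    regime = "overconstrained" if alpha > THRESHOLD else "underconstrained"
    print(f"c={c}  v={v}  alpha={alpha:.2f}  ({regime})")
```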
As the number of clauses increases, so should the number of variables (randomly chosen).
Why should it? You didn’t say anything about that.
Even for overconstrained values of alpha, the problem is still NP-complete, i.e. (as far as we know) exponential in the worst case. Being able to make an accurate prediction for overconstrained values is certainly possible, but would we call that heuristic an “intuition”?
I don’t think there’s a difference between heuristics and intuitions, or rather, I think many intuitions may just be how heuristics feel on the inside.
As the number of clauses increases, so should the number of variables (randomly chosen). That is, the alpha that your result mentions can well fluctuate between over- and underconstrained values even with a googol clauses.
For any particular 3-SAT instance, you can calculate α. If the random algorithm that generated it doesn’t favor any particular clause types, then the distribution of outputs with c clauses and v variables should be uniform among all such 3-SAT instances, and therefore we can use α to figure out if the formula is (most likely) satisfiable.
Furthermore, unless you deliberately try to generate random instances with a particular value of α, I imagine under most models the value of α will be (with high probability) well on one side of the threshold, and so the probability that your formula is satisfiable will be either very nearly 1 or very nearly 0, depending on the random model.
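Concretely, the prediction is nothing more than a ratio test; here is a minimal sketch, where the threshold value (α ≈ 4.27, the empirically located one for random 3-SAT) and the instance encoding are my assumptions for illustration:

```python
THRESHOLD = 4.27  # assumed: the empirically located random 3-SAT threshold

def predict_satisfiable(clauses, num_vars):
    """Guess satisfiability of a uniformly random 3-SAT instance from alpha alone."""
    alpha = len(clauses) / num_vars
    # Underconstrained (alpha < threshold): almost surely satisfiable;
    # overconstrained (alpha > threshold): almost surely unsatisfiable.
    return alpha < THRESHOLD

# Hypothetical instance encoding: each clause is a tuple of three nonzero ints,
# where a positive/negative int means a positive/negated variable.
instance = [(1, -2, 3), (-1, 4, 5), (2, -3, -4)]
print(predict_satisfiable(instance, num_vars=5))  # alpha = 0.6 -> True
```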
Yes, you can calculate alpha and then, depending on its value, predict with high probability whether the formula is satisfiable. Is that still intuiting an answer?
The question is not whether you can implement smarter algorithms when faced with a typically very, very large (if randomly chosen) number of clauses; the question is whether human-level intuitions will still help you with randomly constructed truth statements.
Not whether they can be resolved, or whether there are neat tricks for certain subclasses.
Of course, if the initial use of “intuition” that sparked this debate is used synonymously with “heuristic”, the point is moot. I was referring to the subset that is more typically referred to as human intuitions.
(I should’ve said k-SAT anyway.)