Yes, you can calculate alpha (the clause-to-variable ratio) and, depending on its value, predict with high probability whether the formula is satisfiable. Is that still intuiting an answer?
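A minimal sketch of that prediction rule for random 3-SAT, assuming the standard result that random 3-SAT formulas have a satisfiability phase transition at a clause-to-variable ratio of roughly 4.27 (the function names and the exact threshold constant here are illustrative):

```python
import random

# Approximate location of the random 3-SAT satisfiability phase
# transition (clause-to-variable ratio); formulas well below it are
# almost surely satisfiable, well above it almost surely not.
THRESHOLD_3SAT = 4.267

def random_3sat(n_vars, n_clauses, rng=random):
    """Generate a random 3-SAT formula as a list of 3-literal clauses.

    Positive integers are positive literals, negative are negated.
    """
    clauses = []
    for _ in range(n_clauses):
        chosen = rng.sample(range(1, n_vars + 1), 3)
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in chosen))
    return clauses

def predict_satisfiable(n_vars, n_clauses):
    """Predict satisfiability from alpha = m/n alone, never reading the formula."""
    alpha = n_clauses / n_vars
    return alpha < THRESHOLD_3SAT

# Far below the threshold (alpha = 2.0): predict satisfiable.
print(predict_satisfiable(1000, 2000))  # True
# Far above the threshold (alpha = 6.0): predict unsatisfiable.
print(predict_satisfiable(1000, 6000))  # False
```

Note the prediction only looks at the ratio, not at the clauses themselves, which is exactly why one can ask whether applying it counts as "intuiting" anything about the formula.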
The question is not whether you can implement smarter algorithms when faced with a typically very large number of randomly chosen clauses; the question is whether human-level intuitions will still help you with randomly constructed truth statements.
Not whether they can be resolved, or whether there are neat tricks for certain subclasses.
Of course if the initial use of “intuition” that sparked this debate is used synonymously with “heuristic”, the point is moot. I was referring to the subset that is more typically referred to as human intuitions.
(I should’ve said k-SAT anyway.)