I disagree with the claim that “identifying good subproblems of well-posed problems is a different skill from identifying good well-posed subproblems of a weird and not formalized problem”, at least insofar as we’re focused on problems for which current paradigms fail.
P vs NP is a good example here. How do you identify a good subproblem for P vs NP? I mean, lots of people have come up with subproblems in mathematically-straightforward ways, like the strong exponential time hypothesis or P/poly vs NP. But as far as we can tell, these are not very good subproblems—they are “simplifications” in name only, and whatever elements make P vs NP hard in the first place seem to be fully preserved in them. They don’t simplify the parts of the original problem which are actually hard. They’re essentially variants of the original problem, part of a whole cluster of problems which are probably-effectively-identical in terms of the core principles. They’re not really simplifications.
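To make the “variants, not simplifications” point concrete, here is the standard implication structure (textbook complexity-theory facts, added as a sketch for reference rather than something from the original discussion). Since P ⊆ P/poly, and SETH is known to imply ETH:

\[
\mathrm{NP} \not\subseteq \mathrm{P/poly} \;\Rightarrow\; \mathrm{P} \neq \mathrm{NP},
\qquad
\mathrm{SETH} \;\Rightarrow\; \mathrm{ETH} \;\Rightarrow\; \mathrm{P} \neq \mathrm{NP}.
\]

So a proof of either “subproblem” would in particular prove P ≠ NP; these statements are at least as strong as the original separation, not easier pieces of it.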
Simplifying an actually-hard part of P vs NP is very much a fuzzy conceptual problem. We have to figure out how-to-carve-up-the-problem in the right way, how to frame it so that a substantive piece can be reduced.
I suspect that your intuition that “there are way more useful and generalizable techniques for the first case than the second case” is looking at things like simplifying-P-vs-NP-to-strong-exponential-time-hypothesis, and mistaking these for useful progress on the hard part of a hard problem. Something like “simplify the problem as much as possible without making it trivial” is a very useful first step, but it’s not the sort of thing which is going to address the hardest part of a problem when the current paradigm fails. (After all, the current paradigm is usually what underlies our notion of “simplicity”.)