I guess we’re talking past each other then, because I (plausibly, I think, given the context) took your original reply to still refer to the Kolmogorov complexity goal. My beef is with that particular formulation, because I find it’s sometimes illegitimately overused for what amounts to merely emotional effect. I’m all for working on optimizing imperfectly-defined, hard-to-pin-down goals! Been doing that for a while with my life.
(the results are mixed)
The hypothetical still applies, I think. Suppose minimizing K-complexity happened to be one’s goal; then there are probably some steps that could be taken in its pursuit, and in any case it wouldn’t be right to call it “foolish” if it’s indeed the goal, even in the unlikely situation where nothing whatsoever can predictably advance it (maybe one should embark on a quest for a Turing oracle or something). It might be foolish to think that it’s a human (sub)goal, though, since it would clash with what we actually seek.
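(To make “some steps that could be taken” concrete: K-complexity itself is uncomputable, but compressed length gives a computable upper bound on it, up to an additive constant for the decompressor, so one can at least optimize the bound. A minimal, purely illustrative sketch; the function name and the choice of zlib are mine, not anything from the discussion:)

```python
import os
import zlib

def k_upper_bound(data: bytes) -> int:
    """Compressed length is a computable upper bound on K(data),
    up to an additive constant for the fixed decompressor."""
    return len(zlib.compress(data, 9))

# Two inputs of equal length: the structured one gets a far smaller
# bound, which is a crude but real step "in pursuit of" lower K-complexity.
print(k_upper_bound(b"ab" * 500))       # highly compressible
print(k_upper_bound(os.urandom(1000)))  # near-incompressible
```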