Well, I’m not buying the K-complexity goal in particular, which is why I said only “perfection”; I’m making a different point. The thing about goals is that they are not up for grabs: they can’t in themselves be foolish, only actions or subgoals can be. Foolishness must follow from incongruity with some higher goal (which cares nothing for efficiency or probability of success; those are mere instrumental concerns), so if one’s goal is to optimize some hard-to-optimize quality whose level of optimality is also hard to gauge, that’s still what one should do, if only by taking hard-to-arrange, accidental opportunities for improvement.
I guess we’re talking past each other then, because I (plausibly, I think, given the context) took your original reply to still refer to the Kolmogorov-complexity goal. My beef is with that particular formulation, because I find it is sometimes used illegitimately, for what amounts to mere emotional effect. I’m all for working on optimizing imperfectly defined, hard-to-pin-down goals! Been doing that with my life for a while.
(the results are mixed)
The hypothetical still applies, I think. Suppose minimizing K-complexity happened to be one’s goal; then there are probably some steps that can be taken in its pursuit, and in any case it wouldn’t be right to call it “foolish” if it is indeed the goal, even in the unlikely situation where nothing whatsoever can predictably advance it (maybe one should embark on a quest to find a Turing oracle or something). It might be foolish to think it’s a human (sub)goal, though, since there it would clash with what we actually seek.
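As an aside on how “steps can be taken” and “hard to gauge” can both hold: true Kolmogorov complexity is uncomputable, but any general-purpose compressor gives an upper bound, so one can at least measure progress against that proxy. A minimal sketch of that idea (the choice of zlib and the example strings are purely illustrative assumptions, not anything from the discussion above):

```python
import os
import zlib

def k_complexity_upper_bound(data: bytes) -> int:
    """Length of a zlib-compressed encoding of `data`.

    This only upper-bounds Kolmogorov complexity (up to an additive
    constant for the decompressor); the true value is uncomputable,
    so a proxy like this is about the best one can gauge.
    """
    return len(zlib.compress(data, 9))

# Illustrative comparison: a highly regular string vs. incompressible noise.
regular = b"ab" * 500          # 1000 bytes with obvious structure
random_ish = os.urandom(1000)  # 1000 bytes with (almost surely) none

print(k_complexity_upper_bound(regular))     # small: the regularity gets exploited
print(k_complexity_upper_bound(random_ish))  # roughly 1000 or more: nothing to exploit
```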