For any concept, you can find a sufficiently rich context that makes the concept inadequate. The concept would be useful in simpler situations, but breaks down in more sophisticated ones. It’s still recognized in them, by the same procedure that allows us to recognize the concept where it is useful.
A concept is only genuinely useless if there are hardly any contexts where it’s useful, not if there are situations where it isn’t. You are too eager to explain useful tools away by confronting them with existence proofs of insurmountable challenges, and with the older cousins that should be deployed in those situations instead.
When you are worried about the fallacy of compression, the worry that too many things interfere with each other when put under the same simplistic concept, remember that it’s a tradeoff: you necessarily lump some non-identical things together, and you necessarily become less accurate at tracking each of them than if you paid a little more attention in that particular case. But on the whole, you can’t keep track of everything all the time, so wherever it’s feasible, any simplification should be welcome.
See also: least convenient possible world, fallacy of compression, scales of justice fallacy.
It’s getting more and more obvious that my neurology is a significant factor here. I deal poorly with situations involving certain kinds of limited context; I seem never to have developed the heuristics that most people use to make sense of them, which is a fairly common issue for autistics. I don’t make the tradeoff you suggest as often as most people do, and I do tend to juggle more bits of information at any given time, because it’s the only way I’ve found that leads me, personally, to reasonably accurate conclusions. Instances where I can meaningfully address a situation with limited context are rare enough that tools for handling them seem useless to me.
I may need to work on not generalizing from one example about this kind of thing, though, to avoid offending people if nothing else.
Btw, see also Yvain’s The Trouble With “Good”.
Interesting post, but not terribly useful at first glance. It started with what sounded like a good description of how I work, diverged from how I do things at “But we are happy using the word ‘good’ for all of them, and it doesn’t feel like we’re using the same word in several different ways, the way it does when we use ‘right’ to mean both ‘correct’ and ‘opposite of left’”, and wound up offering a different solution to the problem (though a useful one for dealing with others) than the very personally efficient one I’ve been using for a few years now. I do actually feel the difference between the various meanings of ‘good’. I haven’t cataloged them, since I don’t see any personal usefulness in doing so (note that I don’t generally think in words), but I estimate at least half a dozen common meanings and several rarer ones. That’s somewhat beside the point, though.
My fix for the presented problem involves the following heuristic: the farther from neutral my general opinion of a class of things is, the more likely it is to be incorrect in any given case. Generally, a strong generalized positive or negative opinion is a sign that I’m underinformed in some way: I’ve been getting biased information, or I haven’t noticed a type of situation where that kind of thing has a different effect than the one I’m aware of, or I haven’t noticed that it’s related to another class of things in some important way. The heuristic doesn’t disallow strong generalized opinions altogether, but it does enforce a higher standard of proof on more extreme ones, and it leads me to explore the aspects of things that run counter to my existing opinion, in an attempt to reach a more neutral (and more complex, which is the real goal) opinion of them. It still allows strong contextual reactions, which I haven’t yet seen a problem with, and which do appear to be generally useful.
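Concretely, the scaling could be sketched like this (a toy illustration only; the [-1, 1] opinion scale, the linear threshold, and all the names are assumptions invented for the example, not part of the heuristic as originally stated):

```python
# Toy sketch of the heuristic above: opinions live on [-1, 1] (0 = neutral),
# and the evidence required to keep holding an opinion grows with its
# distance from neutral. Scale and threshold are illustrative assumptions.

def required_evidence(opinion: float, base: float = 1.0) -> float:
    """Standard of proof scales linearly with distance from neutral (0)."""
    return base * (1.0 + abs(opinion))

def audit(opinion: float, evidence: float) -> str:
    """Keep an opinion only if its support meets the scaled threshold."""
    if evidence >= required_evidence(opinion):
        return "keep"
    return "under-supported: explore counter-evidence, shift toward neutral"

# A strong opinion with modest support gets flagged; a mild one does not.
print(audit(opinion=0.9, evidence=1.2))  # -> under-supported
print(audit(opinion=0.1, evidence=1.2))  # -> keep
```

Note that the same amount of evidence passes for the mild opinion and fails for the extreme one, which is the whole point: extremity itself raises the bar.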
Regarding the concepts of good (in the ‘opposite of evil’ sense) and evil, my apparent non-neutrality is personal (which is a kind of persistent context). They’re more harmful than helpful in achieving the kinds of goals I tend to be most interested in, like gaining a comprehensive understanding of real-world conflicts or coming to appropriately supported, useful conclusions about moral questions. And while they seem to be more helpful than harmful in the pursuit of other goals, like manipulating people (which I am neutral on, to a degree that most people I know find disturbing) and creating coherent communities of irrational people, I personally don’t consider those things relevant enough to sway my opinion. Disregarding the personal aspects, I think I have a near-neutral opinion of the existence of the concepts, but it’s hard to tell; I haven’t spent much time thinking about the issue on that scale.
Edit: And I believed that this group has similar enough interests to generate the same kind of ‘personal’ context. I may have been wrong, but I thought that those concepts were generally more harmful than helpful in solving the kinds of problems considered important here, and by the kinds of individuals who participate here. Otherwise I wouldn’t have mentioned the issue at all, as I usually don’t.
My reaction in the original comment was contextual, both in the personal sense and with regard to the type of presentation it was. That kind of reaction follows a very different set of heuristics than the ones I use to regulate general opinions: it allows strong reactions much more easily, but limits the effects of those reactions to the context at hand, perhaps in a much stricter way than you (plural) are assuming. I haven’t taken the time to note the presenter’s name (and I’m moderately faceblind and not good at remembering people by their voices), so even another presentation by the same person on the same topic will be completely unaffected by my reaction to this presentation.