The rule “90% of everything is garbage” applies, but recent moral values reject any sort of hierarchy, even between functional and dysfunctional countries, cultures, cities, religions, values, etc.
When society suppresses attempts to evaluate concepts or situations as objectively better or worse than alternatives, is it any surprise that polarization increases? If there are no commonly agreed-upon benchmarks to calibrate against, it becomes a war of whoever can shout loudest or most convincingly.
I find that subjective measurements are punished harder than objective ones. You are sometimes forgiven for claiming that “science shows X”, but personal opinions are rarely allowed to discriminate, even though they, by their very nature, are meant to do exactly that. Example: “I want to date X type of people” or “I wouldn’t date X type of people”. For almost every category of X, you’ll be judged hard for your preferences, even if you didn’t consciously choose any of them.
I don’t think it’s just about shouting the loudest or most convincingly. At least I want to stress that what counts as “convincing” is more emotional than rational in all cases where the rational is less pleasant to the ear. Some people can see through this and side with the truth, but I think their share is too small to counter the effect.
Since this is mostly about value, objectivity can’t help us. Even if it could (through agreement about metrics), the relationships in real-world data are too complex. War feels terrible, yet it’s great for technological advancement. “War is good” is not a common opinion at all; it lost, and the positive effects are rarely even considered. Society tends to think of things as either entirely good or entirely bad, but if you consider three or four links of cause and effect, such thinking becomes useless. Society generally doesn’t look that far, though, and neither does it like people who do: people who look that far ahead will advocate for terrible things now to bring about good things later (accelerationism, revolution, eugenics, etc.). Yet society will happily make the locally best choice even when it’s completely unsustainable.
Anyway—I think making the correct choice requires some willpower, for the same reason that it requires willpower to eat a salad rather than a burger. But the average person, to the extent that they’re “moral”, tends to be weak: no willpower, no backbone, no ability to resist temptation, conflict-shy, afraid to assert themselves. Stronger people suffer from this effect, for they can either make the worse choice or get called “evil” for making the better one. To use an example which may be familiar to you: how do you save somebody who is addicted to something harmful, or procrastinating on important work? You either aid their destruction or take their pleasure away from them, and both choices are painful.
You’re right, “objectively” doesn’t fit as well in that statement as I thought.
That is how I intended ‘convincing’ to be interpreted.
For almost every category of X, you’ll be judged hard for your preferences, even if you didn’t consciously choose any of them.
It depends on whether X is a demographic/group or a variable. “I don’t want to date people who are [uneducated/from a drastically different cultural background]” sounds a lot less politically correct than “I want to date people with whom I estimate a high probability of mutual relationship satisfaction”, because with the latter you don’t have to explain your criteria to everyone. I admit that’s more semantic obfuscation of judgement-risk markers than it is mitigating the problem.
It does depend on how you explain yourself, but in the end you’re just wording the same thing (the same preference) differently, and that’s still assuming that you know the reasons for your own preferences, and that they have reasons at all.
The logic seems to be “when the truth looks bad, it is bad, therefore you must pretend otherwise”, which adds a useless layer on top of everything, obscuring the truth. The truth isn’t always more valuable than pleasant lies, but when this constructed social reality starts influencing areas in which truth does matter (like medicine, general science, and ways of doing things, like parenting), I find that it’s harmful.
I’ll also admit that I don’t find preferences to be a problem at all, even though most preferences are shallow (occurring before conscious thought). I think both lying about them and inferring something from them is more harmful. All this perceived intent where none exists is what makes aspects of life so unappealing. I find most people’s perceptions to be unhealthy, by which I mean lacking in innocence, resulting in a sort of oversensitivity, a tendency to project or to interpret negative signals.
This is sort of abstract, but if we assume that racism is solved by not seeing color, then moral evil can be solved by not looking at the world through such a lens. Favorable and unfavorable outcomes will still exist, the dimension of “pure/corrupt” feelings associated with things will just disappear. This may be throwing out the baby with the bathwater though.
I see! I think we largely agree then.