Why not? Do you want to let people start using words with very negative connotations to refer to whatever they want? In practice, that’s how “toxic” is used: it means whatever people want it to mean, typically deployed when the thing in question seems socially inconvenient for them.
Imagine you acted according to your own logic and best judgment of probabilities, in the way you normally do and in a way you see as nuanced with respect to the moral culture you have lived through and continue to live in, and suddenly someone wants to paint it as “fascist” without understanding the basis at all. Do YOU think your behavior is fascist? Probably not. Words should not have such power without robust moral reasoning driving them; otherwise they are bound to bring dogmatic irrationality into situations that robust moral reasoning would have something to say about.
It is possible that OP has an unusually operationalized usage prepared for “toxic,” but at the time I made my first comment on this post, the right bet was against it.

Related: The Anti-Jerk Law by Bryan Caplan
Careful reasoning (precision) helps with calibration, but it is not synonymous with it. Systematic error is about calibration, not precision, so demanding that it be solved by improving precision is like demanding a particular argument: it risks rejecting correct solutions that fall outside the scope of what’s demanded. That is, if calibration can be ensured without precision, your demand won’t be met, yet the problem would be solved. Hence my objection to the demand.
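To make the precision/calibration distinction concrete, here is a minimal numerical sketch (my own illustration in the bias/variance sense, not anything from the thread): one estimator is noisy but centered on the truth (well-calibrated without precision), the other is tight but systematically shifted (precise but miscalibrated).

```python
import random

# Illustrative sketch only: "precision" vs "calibration" in the bias/variance sense.
# An estimate can be noisy yet unbiased (calibrated without precision),
# or tight yet systematically off (precise but miscalibrated).

random.seed(0)
TRUE_VALUE = 10.0
N = 100_000

# Noisy but unbiased: wide spread, centered on the truth.
noisy_unbiased = [TRUE_VALUE + random.gauss(0, 5) for _ in range(N)]

# Precise but biased: tight spread, systematically shifted by +2.
precise_biased = [TRUE_VALUE + 2 + random.gauss(0, 0.1) for _ in range(N)]

def summary(samples):
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    return mean, var ** 0.5

print("noisy but unbiased: mean=%.2f, sd=%.2f" % summary(noisy_unbiased))  # mean ~ 10.0
print("precise but biased: mean=%.2f, sd=%.2f" % summary(precise_biased))  # mean ~ 12.0
```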
Whether lsusr is well-calibrated in their judgment, I can only find out by hearing their operationalization (careful reasoning, i.e. precision); otherwise I can only expect them to make the same errors I typically see from people who rely heavily on judging things as toxic.
Hence “a risk, not necessarily a failure”. If the prior says that a systematic error is in place, and there is no evidence to the contrary, you expect the systematic error. But it’s an expectation, not precise knowledge; it might well be the case that there is no systematic error.
Furthermore, ensuring that there is no systematic error doesn’t require that fact to become externally verifiable. So an operationalization is not necessary to solve the problem, even if it’s necessary to demonstrate that the problem is solved. It’s also far from sufficient: with vaguely defined topics such as this one, deliberation easily turns into demagoguery, misleading with words instead of using them to build a more robust and detailed understanding. So it’s more of a side note than the core of a plan.