Careful reasoning (precision) helps with calibration, but is not synonymous with it. Systematic error is about calibration, not precision, so demanding that it be solved by improving precision is like demanding a particular argument, which risks rejecting correct solutions outside the scope of what's demanded. That is, if calibration can be ensured without precision, your demand won't be met, yet the problem would still be solved. Hence my objection to the demand.
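To make the distinction concrete, here is a minimal toy sketch (mine, not part of the thread); the base rate, the two forecasters, and the bucketed check are all illustrative assumptions. A forecaster who only ever reports the base rate offers no precision, yet comes out calibrated; a forecaster who produces varied, confident-looking numbers can be miscalibrated in every bucket.

```python
import random
from collections import defaultdict

BASE_RATE = 0.7  # assumed frequency of the event class in this toy world


def coarse_forecaster(rng):
    """No fine-grained analysis at all: always reports the base rate."""
    return 0.7


def detailed_but_biased_forecaster(rng):
    """Varied, confident-looking numbers that are systematically too extreme."""
    return rng.choice([0.05, 0.95])


def calibration_by_bucket(forecaster, n=100_000, seed=0):
    """For each stated probability, return the observed frequency of the event."""
    rng = random.Random(seed)
    outcomes_by_prediction = defaultdict(list)
    for _ in range(n):
        stated = forecaster(rng)
        happened = rng.random() < BASE_RATE
        outcomes_by_prediction[stated].append(happened)
    return {p: sum(v) / len(v) for p, v in sorted(outcomes_by_prediction.items())}


# Coarse forecaster: stated 0.7, observed ~0.7 -- calibrated without precision.
# Detailed forecaster: stated 0.05 and 0.95, observed ~0.7 in both buckets.
print(calibration_by_bucket(coarse_forecaster))
print(calibration_by_bucket(detailed_but_biased_forecaster))
```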
Whether lsusr is well-calibrated in their judgment is something I can only find out by hearing their operationalization (careful reasoning, i.e. precision); otherwise I can only expect that they make the same errors I typically see from people who rely heavily on judging things as toxic.
Hence “a risk, not necessarily a failure”. If the prior says that a systematic error is in place, and there is no evidence to the contrary, you expect the systematic error. But that's an expectation, not precise knowledge; it might well be the case that there is no systematic error.
Furthermore, ensuring that there is no systematic error doesn't require that fact to become externally verifiable. So an operationalization is not necessary to solve the problem, even if it's necessary to demonstrate that the problem is solved. It's also far from sufficient: with vaguely defined topics such as this one, deliberation easily turns into demagoguery, misleading with words instead of using them to build a more robust and detailed understanding. So it's more of a side note than the core of a plan.