Posting some thoughts I wrote up when first engaging with the question for 10-15 minutes.
The question is phrased as: How Can People Evaluate Complex Questions Consistently? I will be reading a moderate amount into this exact phrasing: specifically, that it's specifying a project whose primary aim is increasing the consistency of answers to questions.
The project strikes me as misguided. It does seem to me clearly the case that consistency is an indicator of accuracy: if your "process" is reliably picking out a fixed situation in the world, then this process will give roughly the same answers as applied over time. Conversely, if I keep getting back disparate answers, then likely whatever answering process is being executed isn't picking up a consistent feature of reality.
Examples:
1) I have a measuring tape and I measure my height. Each time I measure myself, my answer falls within a centimeter range. Likely I'm measuring a real thing in the world with a process that reliably detects it. We know how my brain and my height get entangled, etc.
2) We ask different philosophers about the ethics of euthanasia. They give widely varying answers for widely varying reasons. We might grant that there exists one true answer here, but that the philosophers are not all using reliable processes for accessing that true answer. Perhaps some are, but clearly not all are, since they're not converging, which makes it hard to trust any of them.
Under my picture, it really is accuracy that we care about almost all of the time. Consistency/precision is an indicator of accuracy, and lack of consistency is suggestive of lack of accuracy. If you are well entangled with a fixed thing, you should get a fixed answer. Yet, having a fixed answer is not sufficient to guarantee that you are entangled with the fixed thing of interest. (“Thing” is very broad here and includes abstract things like the output of some fixed computation, e.g. morality.)
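A toy numeric sketch of that asymmetry (hypothetical numbers, not any real measurement): two measuring processes can be equally consistent, yet only one is entangled with the quantity we care about.

```python
import random

random.seed(0)

TRUE_HEIGHT = 175.0  # cm; the fixed thing in the world

# An accurate, precise process: small noise, no bias.
good_tape = [TRUE_HEIGHT + random.gauss(0, 0.3) for _ in range(10)]

# A consistent but *wrong* process: equally small noise, plus a 5 cm
# systematic bias. Its answers are fixed, yet it isn't tracking the
# quantity of interest.
stretched_tape = [TRUE_HEIGHT + 5.0 + random.gauss(0, 0.3) for _ in range(10)]

def spread(xs):
    return max(xs) - min(xs)

def mean(xs):
    return sum(xs) / len(xs)

# Both processes look "consistent" (small spread of repeated answers)...
print(spread(good_tape), spread(stretched_tape))

# ...but only one is accurate (mean close to the true value).
print(abs(mean(good_tape) - TRUE_HEIGHT), abs(mean(stretched_tape) - TRUE_HEIGHT))
```

Consistency alone can't distinguish the two tapes; you only catch the stretched one by checking entanglement with the true value some other way.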
The real solution/question then is how to increase accuracy, i.e. how to increase your entanglement with reality. Trying to increase consistency separately from accuracy (even at the expense of accuracy!) is mixing up an indicator and symptom with the thing which really matters: whether you're actually determining how reality is.
It does seem we want a consistent process for sentencing and maybe pricing, but that's not so much about truth as it is about "fairness" and planning, where we fear that differing amounts (e.g. sentence durations) are not sourced in legitimate differences between cases. But even this could be cast in the light of accuracy: suppose there is some "true, correct, fair" sentence for a given crime; then we want a process that actually gets that answer. If the process actually works, it will consistently return that answer, which is a fixed aspect of the world. Or we might just choose the thing we want to be entangled with (our system of laws) to be a more predictable/easily-computable one.
I've gotten a little rambly, so let me focus again. I think consistency and precision are important indicators to pay attention to when assessing truth-seeking processes, but when it comes to making improvements, the question should be "how do I increase accuracy/entanglement?" not "how do I increase consistency?" Increasing accuracy is the legitimate method by which you increase consistency. Attempting to increase consistency rather than accuracy is likely a recipe for making accuracy worse, because you're now focusing on the wrong thing.
jimrandomh correctly pointed out to me that precision can have its own value for various kinds of comparison. I think he's right. If A and B are biased estimators of 'a' and 'b', and their bias is consistent (lowering accuracy) but their precision is high, then I can make comparisons between A/a and B/b over time, and between each other, in ways I couldn't if the estimators were less biased but noisier.
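A sketch of jimrandomh's point with made-up numbers: if both estimators share a consistent bias, the bias cancels in comparisons, so a biased-but-precise pair can rank a vs. b far more reliably than an unbiased-but-noisy pair.

```python
import random

random.seed(1)

a, b = 10.0, 10.5  # true values; b is slightly larger than a

def biased_precise(x):
    # Shared systematic bias of +2 (bad accuracy), tiny noise (high precision).
    return x + 2.0 + random.gauss(0, 0.05)

def unbiased_noisy(x):
    # No systematic bias (good accuracy on average), heavy noise (low precision).
    return x + random.gauss(0, 2.0)

N = 100
# In the difference B - A, the shared +2 bias cancels, so the precise
# estimators almost always get the ordering (b > a) right.
precise_diffs = [biased_precise(b) - biased_precise(a) for _ in range(N)]

# The noisy estimators have no bias to cancel, but the noise swamps the
# small true difference, so individual comparisons often come out backwards.
noisy_diffs = [unbiased_noisy(b) - unbiased_noisy(a) for _ in range(N)]

frac_correct_precise = sum(d > 0 for d in precise_diffs) / N
frac_correct_noisy = sum(d > 0 for d in noisy_diffs) / N
print(frac_correct_precise, frac_correct_noisy)
```

The cancellation only works because the bias is the *same* fixed offset each time, which is just the consistency property again, applied to the error rather than the answer.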
Still, the requirement here is that the estimator is tracking a real, fixed thing.
If I were to try to improve my estimator, I'd look at the process it implements as a whole and try to improve that, rather than just trying to make the answer come out the same.