I feel like values are defined over outcomes, while biases are defined over cognitive processes.
You could value a bias, I suppose, but then you'd be valuing executing particular algorithms over, like, saving the world. If that's the case, I think the people arguing for a bias are looking for an easy way out of a problem, or are more attached to their identity than I believe to be useful.
Not that I’ve reflected that much on it, but that’s my intuition coming in.
That seems good; check out my favored solution. I'll have to think about yours more to see if it works.
So how would you apply this idea to the examples in the OP?
This is also my understanding of the distinction.