“Whatever percent of innocent convictions we’re comfortable with” isn’t the right way to do it. You need to weigh the expected utilities and go with that.
For example, say we have someone suspected of murder, and you think it’s only 20% likely that he did it, but executing him, given that he’s guilty, saves an expected 10 lives. Then you do it. If there were a second suspect with the same p(guilt), and you knew only one of them was guilty, you’d execute them both.
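The arithmetic above can be sketched as a simple expected-value calculation. The cost assigned to executing an innocent person (here, one life) is an assumption for illustration; the argument depends on what weight you actually give it:

```python
def expected_utility(p_guilt, lives_saved_if_guilty, cost_if_innocent):
    """Expected utility of executing, in lives.

    p_guilt: probability the suspect is guilty
    lives_saved_if_guilty: expected lives saved if he is guilty
    cost_if_innocent: cost (in lives) of executing an innocent person
    """
    return p_guilt * lives_saved_if_guilty - (1 - p_guilt) * cost_if_innocent

# With p(guilt) = 0.2, 10 expected lives saved, and an (assumed)
# cost of 1 life for executing an innocent:
eu = expected_utility(0.2, 10, 1)
print(eu)  # 0.2 * 10 - 0.8 * 1 = 1.2 > 0, so execute
```

On these (illustrative) numbers the expected utility is positive even at only 20% confidence, which is the point: the decision threshold falls out of the weights, not out of an arbitrary comfort level.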
There are all sorts of disclaimers that could be added, but the point is that the threshold isn’t arbitrary, and intuitions don’t get close to the right answer.
What I said to Phil:
You are conflating Bayesian justice with utilitarian justice.