Shouldn’t these be general rules of decision-making? Not one-off rules, but something that applies to killing, lying, turning on the LHC, AND going across the street for coffee?
Presumably, we did not evolve to be tempted to turn on the LHC. So the likelihood that we’re wrong about it despite good reasons is different from the likelihood that we’re wrong about telling a useful lie despite good reasons.
The real general rule for declaring your own reasoning fatally broken needs to take your own mind design as an argument. We can’t implement that (though it might simply be impossible), so we fall back on rules that cover the cases we’ve figured out.
But I don’t see this as an honest strategy. It’s like deciding that relativity is too hard, so we shouldn’t build anything that goes too close to c.
The problems are: first, relativity is always in play, so our calculations will always be slightly wrong, and sometimes the error will matter even when our rule says it won’t; and second, we don’t get the advantages of building things that go fast.
Likewise: not-killing and not-lying as absolutes don’t protect us from the many other ways our unreliable brains can fail us, and we’ll refuse to lie or kill even when it really is the best option. At the very least, we need to make our rules higher-resolution, and not with a bias toward leniency. So find the criteria under which we can kill or lie with a low probability of self-error. (What exactly specifies a “Jews in the basement” type of situation?) But also find the criteria under which commonly accepted behaviors are accepted only because of our biases.
I’m far less sure that it’s okay for me to order coffee than I am that it’s not okay to murder. I might fool myself into thinking some killing is justified, but I might also be fooling myself into thinking ordering coffee is okay. Murder is much more significant, but ordering coffee is the choice I’m making every day.
I think you’ve already posted some general rules for warning yourself that you’re probably fooling yourself. If those are insufficient in the cases of lying and murdering, then I don’t think they’re sufficient in general. And it’s the general cases, I’m guessing, that have more real impact.
And if you shore up the general rules, then for any hypothetical murder-a-young-Hitler situation, you will be able to say “Well, in that situation you are subject to foo and bar cognitive biases and can’t know bif and baz about the situation, so you have an X% probability of being mistaken in your justification.”
You’re able to state WHY it’s a bad idea even when it’s right (or you find out X is close to 0).
On the other hand, there might be some biases that only come into play when we’re thinking about murdering, but I still think the detailed reasoning is superior.
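To make the “X% chance of being mistaken” idea concrete, here’s a minimal toy sketch in Python. The function name, the numbers, and the two scenarios are all made up for illustration; it’s just the kind of explicit threshold rule the detailed reasoning would produce, not anything from the original post.

```python
# Toy sketch: decide whether to break an injunction, given an explicit
# estimate of how likely you are to be self-deceived about your justification.
# All values below are illustrative placeholders.

def should_break_rule(value_if_right: float, harm_if_wrong: float,
                      p_self_deceived: float) -> bool:
    """Expected value of acting on an 'exception', discounted by the
    chance that your justification is motivated reasoning."""
    expected = (1 - p_self_deceived) * value_if_right - p_self_deceived * harm_if_wrong
    return expected > 0

# "Jews in the basement" type case: estimated self-deception is low, so lying wins.
print(should_break_rule(value_if_right=100, harm_if_wrong=10, p_self_deceived=0.05))   # True

# "Murder a young Hitler" type case: the relevant biases push p_self_deceived high,
# so the injunction holds even though the imagined payoff is large.
print(should_break_rule(value_if_right=1000, harm_if_wrong=1000, p_self_deceived=0.6))  # False
```

The particular numbers don’t matter; the point is that once the bias estimate is explicit, you can say why the injunction holds (p_self_deceived is high) or doesn’t (X is close to 0).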