In this case? Yes. Even if the Nazis had Omega-like powers, you’d still want to fool them—they’re not any sort of game-theoretic counterpart who you wish would trust your honesty. I’m not entirely sure I’m describing all the factors here, but this scenario doesn’t even feel to me like it’s about the quantity ordinarily known as honesty; there is no bond you are breaking.
The proper form of this scenario is if a Nazi soldier who’s feeling conflicted comes to you and says he wants to talk to you, but only if you vow silence. You do, and he tells you that he suspects there’s a Jewish family next door. He gives you a chance to talk him out of turning them in. You fail. Do you warn the family next door? Now that’s a dilemma of honesty with someone else’s life at stake.
And of course it can get even worse. E.g. Knut Haukelid.
Can the world be so easily partitioned into those we simply wish to fool, and those with whom we might need to cooperate? Is there a simple, parsimonious precommitment to honesty that allows for fooling Nazis, taking confessions, and being believed when we point out global risks? I guess that’s what this post was getting at.
“To tell the truth is a duty, but is a duty only with regard to the man who has a right to the truth.”
Kant disagrees and seems to warn that the principle of truth-telling is universal; you can’t go around deciding who has a right to the truth and who does not. Furthermore, he suggests that your lie could have terrible unforeseen consequences.
Lie to the Nazis who you feel “don’t deserve the truth,” and they may end up treating everyone on the rest of the block as liars, sending all sorts of people to the concentration camps or killing them outright because it’s not worth trying to ferret out the truth, and so on.
Eliezer:
When I was reading through your other article, I thought the “fate of the world” part suggested that not lying should be the basis for a universalizable duty like Kant’s. The existence of a future “fate of the world” event makes it seem like you are getting at the same unforeseen-consequences point as The Big K; is this accurate?
I am concerned that deciding on who is rational enough to treat honestly is a slippery slope.
Personally, this seems like a point where you take a metaphysical stand, rationally work your way through the options within your axiomatic system and then apply your rational program to the choice at hand. I am more utilitarian than Kant, but it is not hard to ignore “proximity” and come up with a cost/benefit calculation that agrees with him.
I don’t see the Haukelid comparison. Haukelid maximised expected lives saved; here it’s clear which decision does that, but the cost is that you wouldn’t have been in a position to make it if the other party had known that’s what you would do.