If good people were liars, that would render the words of good people meaningless as information-theoretic signals, and destroy the ability for good people to coordinate with others or among themselves.
My mental Harry is making a noise. It goes something like "Pfwah!" Interrogating him a bit more, I find he thinks this argument is a gross mischaracterization of the claims of information theory. If you mostly tell the truth, and people can tell this is the case, then your words convey information in the information-theoretic sense.
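A minimal sketch of that claim, using a toy model of my own devising (a single binary claim with a uniform prior, and a speaker who lies by flipping it with some fixed probability): this is just a binary symmetric channel, so the listener receives 1 − H(lie rate) bits per statement, which is zero only when the lie rate is exactly 0.5.

```python
from math import log2

def binary_entropy(p: float) -> float:
    """Entropy in bits of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bits_per_statement(lie_rate: float) -> float:
    """I(truth; statement) for a uniformly-distributed binary claim when
    the speaker flips it with probability lie_rate (a binary symmetric
    channel, so I = 1 - H(lie_rate))."""
    return 1.0 - binary_entropy(lie_rate)

for p in (0.0, 0.05, 0.25, 0.5, 1.0):
    print(f"lie rate {p:.2f}: {bits_per_statement(p):.3f} bits")
```

Under this toy model a 5% liar still transmits about 0.71 bits per statement, and a perfectly predictable 100% liar transmits a full bit (just negate everything he says); only unpredictable lying destroys the signal.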
EDIT: Now I’m thinking about how to characterize “information” in problems where one agent is trying to deceive another. If A successfully deceives B, what is the “information gain” for B? He thinks he knows more about the world; does this mean that information gain cannot be measured from the inside?
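One way to make the EDIT's question precise (my formalization, not anything established in the thread): B's *subjective* gain is the KL divergence from his prior to his posterior, which is nonnegative by construction whether or not the evidence was fabricated, while the *objective* question is whether the update moved B toward or away from the truth. From the inside the two are indistinguishable, because computing the second requires access to the true distribution.

```python
from math import log2

def kl(p, q):
    """KL divergence D(p || q) in bits, over a finite support."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy world with two hypotheses; hypothesis 0 happens to be true.
truth     = [1.0, 0.0]  # the "true" distribution (degenerate)
prior     = [0.5, 0.5]  # B's prior over the hypotheses
posterior = [0.1, 0.9]  # B's posterior after A's fabricated evidence

# From the inside: the size of B's subjective update. Always >= 0.
print(f"subjective gain: {kl(posterior, prior):.3f} bits")   # ~0.531

# From the outside: did B actually move toward the truth? Can be negative.
print(f"objective gain:  {kl(truth, prior) - kl(truth, posterior):.3f} bits")  # ~-2.322
```

On these illustrative numbers the deceived agent "gains" half a bit from his own perspective while moving more than two bits further from the truth, which suggests the answer to the EDIT's question is yes: that kind of information gain cannot be measured from the inside.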
The sentence you quote actually sounds like a Harry sentence to me, specifically the part where doing an unethical thing leaves good people unable to trust each other or work together, which is a key part of the law of good.
My counterpoints, in broad order of importance:
1. If you lie to people, they should trust you less: observing you lying should reduce their confidence in your statements (a toy Bayesian version of this update appears below). However, nothing in the fundamental rules of the universe says that people notice when they are deceived, even after the fact, or that they will trust you less by any particular amount. Believing that lying, or even being caught lying, will result in a total collapse of confidence, without further justification, is falling for the just-world fallacy.
2. If you saw a man lying to his child about the death of the family dog, you wouldn't (hopefully) immediately refuse to ever have business dealings with such a deceptive, amoral individual. Believing that all lies are equivalent, or that lie frequency does not matter, is to fall for the fallacy of grey.
3. "Unethical" and "deceptive" are different. See HPMOR ch. 51 for HPMOR!Harry agreeing to lie for moral reasons; see also the standard counterarguments to Kant's Categorical Imperative (that lying is always wrong, so one should literally never lie).
4. The point about information theory stands.
Note that “importance” can be broadly construed as “relevance to the practical question of lying to actual people in real life”. This is why the information-theoretic argument ranks so low.
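Here is a minimal sketch of the update described in counterpoint 1, with made-up numbers (the 2%/40% lie rates and the 90% prior are hypothetical, chosen only for illustration):

```python
def posterior_honest(prior_honest: float,
                     p_lie_if_honest: float,
                     p_lie_if_liar: float) -> float:
    """P(honest | caught in one lie), by Bayes' rule."""
    num = p_lie_if_honest * prior_honest
    return num / (num + p_lie_if_liar * (1 - prior_honest))

# A "mostly honest" person lies 2% of the time, a habitual liar 40%;
# you start out 90% confident the speaker is honest.
print(posterior_honest(0.9, 0.02, 0.40))  # ~0.31
```

One caught lie costs a lot of trust here, but confidence falls to roughly 31%, not to zero; total collapse is an extra assumption, not a consequence of the math.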
The idea, I believe, is similar to asking death row prisoners if they are innocent. If you establish that you’re willing to lie about sufficiently important things for non-obvious reasons, people can’t trust you when those reasons are likely to be in play. For Eliezer’s stakes, this would be literally all the time, since it would “be justified” to lie in any situation to save the world.
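The death-row analogy can be put in the same Bayesian terms (again with illustrative numbers of my own): if innocent and guilty prisoners alike always claim innocence, the likelihood ratio is 1 and the claim shifts the listener's probability not at all.

```python
def posterior_innocent(prior: float,
                       p_claim_if_innocent: float,
                       p_claim_if_guilty: float) -> float:
    """P(innocent | claims innocence), by Bayes' rule."""
    num = p_claim_if_innocent * prior
    return num / (num + p_claim_if_guilty * (1 - prior))

# Everyone claims innocence regardless of guilt, so the claim is no evidence:
print(posterior_innocent(0.05, 1.0, 1.0))  # 0.05, unchanged
```

A speaker known to lie whenever "saving the world" demands it is in the same position: on any claim where that motive is plausibly in play, the likelihood ratio collapses toward 1 and his words carry no evidential weight.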
See response to Ben Pace for counterpoints.