I definitely do not agree with the (implied) notion that it is only when dealing with enemies that knowingly saying things that are not true is the correct option.
There’s a philosophically deep rationale for this, though: to a rational agent, the value of information is nonnegative. (Knowing more shouldn’t make your decisions worse.) It follows that if you’re trying to misinform someone, it must either be the case that you want them to make worse decisions (i.e., they’re your enemy), or that you think they aren’t rational.
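As a minimal sketch of the standard argument behind that claim (the notation here is mine, not from the thread: U for utility, a for the action, θ for the unknown state, X for the signal the agent might receive): for a Bayesian expected-utility maximizer,

$$
\mathbb{E}_{X}\Big[\max_{a}\,\mathbb{E}\big[U(a,\theta)\mid X\big]\Big]
\;\geq\;
\max_{a}\,\mathbb{E}_{X}\Big[\mathbb{E}\big[U(a,\theta)\mid X\big]\Big]
\;=\;
\max_{a}\,\mathbb{E}\big[U(a,\theta)\big].
$$

The left-hand side is the agent’s expected utility when it may condition its choice on X; the right-hand side is the best it can do while ignoring X. The inequality holds because an average of maxima is at least the maximum of the averages, and the equality is the law of total expectation. The whole argument presupposes that the agent conditions and optimizes correctly, which is exactly the premise the rest of this exchange pushes back on.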
To clarify, I straightforwardly do not believe any human being I have ever come into contact with is rational enough for information-theoretic considerations like that to imply that something other than telling the truth will necessarily lead to them making worse decisions.
The philosophical ideal can still exert normative force even if no humans are spherical Bayesian reasoners on a frictionless plane. The disjunction (“it must either be the case that”) is significant: it suggests that if you’re considering lying to someone, you may want to clarify to yourself whether, and to what extent, that’s because they’re an enemy or because you don’t respect them as an epistemic peer. Even if you end up choosing to lie, you do so with a different rationale and mindset than someone who has never heard of the normative ideal and just thinks that white lies can be good sometimes.
Yes, this seems correct, with the added clarification that “respecting [someone] as an epistemic peer” is situational rather than a characteristic of the individual in question. It is not that there are people more epistemically advanced than me to whom I believe I should only ever tell the full truth, and then people less epistemically advanced than me to whom I may lie with absolute impunity whenever I start feeling like it. It depends on a particularized assessment of the moment at hand.
I would suspect that most regular people who tell white lies (for pro-social reasons, at least in their minds) generally do so in cases where they (mostly implicitly and subconsciously) determine that the other person would not react well to the truth, even if they don’t spell out the question in the terms you chose.
Is it the case that if two agents are identically irrational (or identically boundedly rational), sharing information between them must have positive value?
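A toy numerical sketch of why the answer can be “no” (this construction and its numbers are entirely mine, not from anyone in the thread): an agent that over-updates on an honestly shared but noisy signal can end up worse off in expectation than if the signal had been withheld, while a correct Bayesian is never hurt by receiving it.

```python
# Toy sketch (my own construction): an honestly shared but noisy signal
# lowers the expected utility of an agent that over-updates on it, while
# a Bayesian agent can never be hurt by receiving it.

P_THETA = 0.9        # prior probability that the state theta = 1
SIGNAL_ACC = 0.6     # P(signal == theta): the signal is truthful but noisy


def eu_risky(p_theta_1: float) -> float:
    """Expected utility of the risky action: +1 if theta = 1, -1 if theta = 0."""
    return p_theta_1 * 1 + (1 - p_theta_1) * (-1)


def likelihoods(signal: int) -> tuple[float, float]:
    """Return (P(signal | theta = 1), P(signal | theta = 0))."""
    if signal == 1:
        return SIGNAL_ACC, 1 - SIGNAL_ACC
    return 1 - SIGNAL_ACC, SIGNAL_ACC


def posterior(signal: int) -> float:
    """Correct Bayesian posterior P(theta = 1 | signal)."""
    like_1, like_0 = likelihoods(signal)
    return P_THETA * like_1 / (P_THETA * like_1 + (1 - P_THETA) * like_0)


def p_signal(signal: int) -> float:
    """Marginal probability of observing this signal value."""
    like_1, like_0 = likelihoods(signal)
    return P_THETA * like_1 + (1 - P_THETA) * like_0


# Without the signal: both agents pick "risky" (EU = 0.9 - 0.1 = 0.8 > 0).
eu_no_signal = max(eu_risky(P_THETA), 0.0)

# Bayesian with the signal: picks whichever action is better under the
# true posterior, so the signal can only help (or leave things unchanged).
eu_bayesian = sum(p_signal(s) * max(eu_risky(posterior(s)), 0.0) for s in (0, 1))

# Over-updater with the signal: treats the 60%-accurate signal as certain,
# so it plays "risky" iff signal = 1, and the safe action (EU = 0) otherwise.
eu_overupdater = sum(
    p_signal(s) * (eu_risky(posterior(s)) if s == 1 else 0.0) for s in (0, 1)
)

print(f"no signal:    {eu_no_signal:.3f}")    # 0.800
print(f"Bayesian:     {eu_bayesian:.3f}")     # 0.800 -- information never hurts
print(f"over-updater: {eu_overupdater:.3f}")  # 0.500 -- true information hurts
```

The particular failure mode (treating a 60%-accurate signal as certain) is just one stand-in for bounded rationality; the point is only that once the receiver’s update rule is not Bayesian, the nonnegativity guarantee no longer applies.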
Over what is “necessarily” quantified, here? Do you mean:
“… to imply that something other than telling the truth will necessarily lead to them making worse decisions, in every case where they make any decision”?
or:
“… to imply that something other than telling the truth will necessarily lead to them making worse decisions, in every case where I tell them something other than the truth”?
or:
“… to imply that it is necessarily the case that a policy other than telling them the truth will, in expectation, lead to them making worse decisions on average”?
or something else?
I ask because under the first two interpretations, for example, the claim is true even when dealing with perfectly rational agents. But the third claim seems highly questionable if applied to literally all people whom you have met.
I believe that S=∅, where S={humans who satisfy T} and T = “in every single situation where someone can either tell them the truth or can instead tell them something false or not fully optimized for truth, they make better decisions when given the truth.”
This is compatible with S′≠∅, where S′={humans who satisfy T′} and T′ = “in the vast majority of situations where someone can either tell them the truth or can instead tell them something false or not fully optimized for truth, they make better decisions when given the truth.” In fact, I believe S′ to be a very large set.
It is also compatible with S″≠∅, where S″={humans who satisfy T″} and T″ = “a general policy of always telling them the truth will, on average (or in expectation over single events), result in them making better decisions than a general policy of never telling them the truth.” Indeed, I believe S″ to be a very large set as well.
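One way to see why the first two claims are compatible (this gloss is mine, using the same symbols): satisfying T entails satisfying T′, so

$$
S \subseteq S',
$$

and S=∅ therefore places no upper bound on how large S′ can be.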
Right, so, that first one is also true of perfectly rational agents, so it tells us nothing interesting. (Unless you quantify “they make better decisions when given the truth” as “in expectation” or “on average”, rather than “as events actually turn out”; but in that case I once again doubt the claim as it applies to people you’ve met.)
Yes, in expectation over how events could turn out given the (very rough and approximate) probability distributions they have in their minds at the time of, or right after, receiving the information. For every single person I know, I believe there are some situations where, were I to give them the truth, I would predict (at the time of giving them the information, not post hoc) that they would perform worse than if I had told them something other than the truth.
This is why I said “make better decisions” rather than merely “obtain better outcomes”: the latter lends itself more naturally to the “as things actually turn out” reading, whereas a decision is evaluated on the basis of what was known at the time, not on what happened to occur.