Over what is “necessarily” quantified, here? Do you mean:
“… to imply that something other than telling the truth will necessarily lead to them making worse decisions, in every case where they make any decision”?
or:
“… to imply that something other than telling the truth will necessarily lead to them making worse decisions, in every case where I tell them something other than the truth”?
or:
“… to imply that it is necessarily the case that a policy other than telling them the truth will, in expectation, lead to them making worse decisions on average”?
or something else?
I ask because under the first two interpretations, for example, the claim is true even when dealing with perfectly rational agents. But the third claim seems highly questionable if applied to literally all people whom you have met.
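For concreteness, here is a rough sketch of those three readings in my own shorthand (Worse(d) = “their decision in situation d is worse than it would have been had they been given the truth”):
(a) ∀d such that they make any decision in d: Worse(d);
(b) ∀d such that I told them something other than the truth in d: Worse(d);
(c) necessarily, for every policy P other than telling them the truth: E[quality of their decisions under P] < E[quality of their decisions under telling them the truth].
The first two differ only in what d ranges over; the third quantifies over policies and compares expectations rather than individual cases.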
I believe that S=∅, where S={humans who satisfy T} and T = “in every single situation where someone can either tell them the truth or can instead tell them something false or not fully optimized for truth, they make better decisions when given the truth.”
This is compatible with S′≠∅, where S′={humans who satisfy T′} and T′ = “in the vast majority of situations where someone can either tell them the truth or can instead tell them something false or not fully optimized for truth, they make better decisions when given the truth.” In fact, I believe S′ to be a very large set.
It is also compatible with S″≠∅, where S″={humans who satisfy T″} and T″ = “a general policy of always telling them the truth will, on average (or in expectation over single events), result in them making better decisions than a general policy of never telling them the truth.” Indeed, I believe S″ to be a very large set as well.
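Put a bit more formally (a rough sketch in my own notation, with Better(h,s) meaning “person h makes a better decision in situation s when given the truth than when given something false or not fully optimized for truth”):
T(h): ∀s, Better(h,s);
T′(h): Better(h,s) for the vast majority of s;
T″(h): E[quality of h’s decisions | always told the truth] > E[quality of h’s decisions | never told the truth].
So the claim is that {h : T(h)}=∅, while {h : T′(h)} and {h : T″(h)} are both very large.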
Right, so, that first one is also true of perfectly rational agents, so it tells us nothing interesting. (Unless you quantify “they make better decisions when given the truth” as “in expectation” or “on average”, rather than “as events actually turn out”; but in that case I once again doubt the claim as it applies to people you’ve met.)
Yes, in expectation over how events can turn out, given the (very rough and approximate) probability distributions they have in their minds at the time of, or right after, receiving the information. For every single person I know, I believe there are some situations where, were I to give them the truth, I would predict (at the time of giving them the information, not post hoc) that they would perform worse than if I had told them something other than the truth.
This is why I said “make better decisions” rather than merely “obtain better outcomes”: the latter lends itself more naturally to the “as things actually turn out” interpretation, whereas a decision is evaluated on the basis of what was known at the time, not on what happened to occur.
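To spell out this notion of decision quality in rough notation of my own: the quality of a choice a made by person h at time t is something like E_p[U(outcome) | a], where p is the rough, approximate probability distribution h has in mind at time t and U is their valuation of outcomes. “Making a better decision” means picking an a with a higher value of this expectation; which outcome then actually occurs is a separate question.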