The mathematical result is trivial, but its interpretation as the practical advice “obtaining further information is always good” is problematic, for the reason taw points out.
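(For reference, the “trivial” result in question is the standard non-negativity of the expected value of information. In notation I’m supplying here, not taken from the thread: for a Bayesian agent choosing an action $a$ to maximise expected utility $U$,

$$\mathbb{E}_{X}\!\left[\,\max_{a}\,\mathbb{E}\left[U \mid a, X\right]\,\right] \;\geq\; \max_{a}\,\mathbb{E}\left[U \mid a\right],$$

where the outer expectation on the left is taken under the agent’s own predictive distribution for the observation $X$. By the agent’s own lights, deciding after looking is never worse in expectation than deciding now.)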
A particular agent can have wrong information, and make a poor decision by combining that wrong information with the new information. Since we’re assuming the additional information is correct, I think it’s reasonable to also stipulate that all previous information is correct.
Actually, I thought of that objection myself, but decided against writing it down. First of all, it’s not quite right to refer to past information as ‘right’ or ‘wrong’ because information doesn’t arrive in the form of propositions-whose-truth-is-assumed, but in the form of sense data.* It’s better to talk about ‘misleading information’ rather than ‘wrong information’. When adversary A tells you P, which is a lie, your information is not P but “A told me P”. (Actually, it’s not even that, but you get the idea.) If you don’t know A is an adversary then “A told me P” is misleading, but not wrong.
Now, suppose the agent’s prior has got to where it is due to the arrival of misleading information. Then relative to that prior, the agent’s expected utility still never decreases when it acquires new data (ignoring taw’s objection).
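Here’s a minimal numerical sketch of that claim, under an assumed toy model (binary state, binary signal, guess-the-state utility); the model and numbers are mine, purely for illustration:

```python
# Toy check that, relative to the agent's OWN prior (however misleadingly
# that prior was arrived at), observing a signal before acting never
# lowers expected utility. Model and numbers are invented for illustration.

prior = 0.4                      # agent's credence that theta = 1
lik1 = {0: 0.3, 1: 0.7}          # P(X = x | theta = 1)
lik0 = {0: 0.8, 1: 0.2}          # P(X = x | theta = 0)

# Utility: guess theta; payoff 1 if right, 0 if wrong.
# Acting now, the best the agent can do is back its more probable state:
eu_now = max(prior, 1 - prior)

# Acting after seeing X: average, under the agent's own predictive
# distribution for X, of the best achievable posterior expected utility.
eu_after = 0.0
for x in (0, 1):
    p_x = prior * lik1[x] + (1 - prior) * lik0[x]   # predictive P(X = x)
    post = prior * lik1[x] / p_x                    # P(theta = 1 | X = x)
    eu_after += p_x * max(post, 1 - post)

print(eu_now, eu_after)   # 0.6 vs 0.76 -- eu_after >= eu_now, as claimed
```

The point is that `eu_after` is computed against the very prior the misleading information produced; the inequality holds no matter how that prior arose.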
(On the other hand, if we’re measuring expectations wrt the knowledge of some better-informed agent, then yes, acquiring information can decrease expected utility. This is for the same reason that, in a Gettier case, learning a new true and relevant fact (e.g. most nearby barn facades are fake) can cause you to abandon a true belief in favour of a false one.)
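And a matching toy version of the barn case (again, all numbers invented): evaluated against the true state of the world rather than the agent’s own credences, conditioning on a true fact leaves the agent strictly worse off:

```python
# Toy version of the barn-facade point: judged against the TRUE state of
# the world rather than the agent's own prior, learning a true fact can
# lower (here, realised) utility. Numbers are invented for illustration.

# True state: the barn the agent is looking at is real.
# The agent bets "real" or "fake"; payoff 1 for a correct bet, 0 otherwise.

p_real = 0.9            # agent's credence that the barn is real, before the news
# Agent learns F: "most nearby barn facades are fake" -- true, and misleading
# only because the agent can't tell that this barn is an exception.
p_real_given_F = 0.3    # assumed posterior credence after conditioning on F

def bet(credence):
    return "real" if credence >= 0.5 else "fake"

# Utility as scored by a better-informed observer who knows the barn is real:
def true_utility(action):
    return 1 if action == "real" else 0

print(true_utility(bet(p_real)))          # before learning F: bets "real", gets 1
print(true_utility(bet(p_real_given_F)))  # after learning F: bets "fake", gets 0
```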
* Yes yes, I know statements like this are philosophically contentious, but within LW they’re assumptions to work from rather than to debate.