IMO, VoI is also not a sufficient criterion for defining manipulation… I’ll list a few problems I have with it, OTTMH:
1) It seems to reduce it to “providing misinformation, or providing information to another agent that is not maximally/sufficiently useful for them (in terms of their expected utility)”. An example (due to Mati Roy) of why this doesn’t seem to match our intuition is: what if I tell someone something true and informative that serves (only) to make them sadder? That doesn’t really seem like manipulation (although you could make a case for it).
2) I don’t like the “maximally/sufficiently” part; maybe my intuition is misleading, but manipulation seems like a qualitative thing to me. Maybe we should just constrain VoI to be positive?
3) Actually, it seems weird to talk about VoI here; VoI is prospective and subjective: it treats an agent’s beliefs as real and asks how much value they should expect to get from samples or from perfect knowledge, assuming those samples (or the ground truth) are distributed according to their beliefs; this makes VoI strictly non-negative (see the formula below). But when we’re considering whether to inform an agent of something, we might recognize that certain information we’d provide would actually be net negative (see my top level comment for an example). Not sure what to make of that ATM...
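For concreteness, the non-negativity claim in (3) is just the standard decision-theoretic identity, written here in notation of my own choosing (an agent with beliefs p over states θ, actions a, utility U, and a candidate signal s):

$$\mathrm{VoI} \;=\; \mathbb{E}_{s \sim p}\Big[\max_{a}\, \mathbb{E}_{\theta \sim p(\cdot \mid s)}\, U(a,\theta)\Big] \;-\; \max_{a}\, \mathbb{E}_{\theta \sim p}\, U(a,\theta) \;\ge\; 0,$$

which is non-negative because the agent can always ignore the signal and keep its prior-optimal action. Note that both expectations are taken under the agent’s own beliefs p, which is exactly the issue when the would-be informer knows better.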
re: #2, VoI doesn’t need to be constrained to be positive. If in expectation you think the information will have a net negative impact, you shouldn’t get the information.
re: #3, of course VoI is subjective. It MUST be, because value is subjective. Spending 5 minutes to learn about the contents of a box you can buy is obviously more valuable to you than to me. Similarly, if I like chocolate more than you, finding out if a cake has chocolate is more valuable for me than for you. The information is the same, the value differs.
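To put toy numbers on the cake example (the utilities and the cake_voi helper below are my own made-up illustration, nothing standard):

```python
# Toy illustration: the same piece of information ("does the cake contain
# chocolate?") has different VoI for two agents with the same beliefs but
# different utilities.

def cake_voi(p_choc, u_buy_choc, u_buy_plain, u_dont_buy=0.0):
    """VoI of learning whether the cake has chocolate, for an agent who
    believes P(chocolate) = p_choc and is deciding whether to buy it."""
    # Best expected utility acting on the prior alone.
    eu_prior = max(p_choc * u_buy_choc + (1 - p_choc) * u_buy_plain, u_dont_buy)
    # Expected utility if the agent learns the answer first, then acts.
    eu_informed = (p_choc * max(u_buy_choc, u_dont_buy)
                   + (1 - p_choc) * max(u_buy_plain, u_dont_buy))
    return eu_informed - eu_prior

print(cake_voi(0.5, u_buy_choc=10, u_buy_plain=-2))  # chocolate lover: 1.0
print(cake_voi(0.5, u_buy_choc=1,  u_buy_plain=-1))  # mild preference: 0.5
```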
FWICT, both of your points are actually responses to my point (3).
RE “re: #2”, see: https://en.wikipedia.org/wiki/Value_of_information#Characteristics
RE “re: #3”, my point was that it doesn’t seem like VoI is the correct way for one agent to think about informing ANOTHER agent. You could just look at the change in expected utility for the receiver after updating on some information, but I don’t like that way of defining it.
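Here’s a toy sketch of the distinction I mean (made-up picnic/weather numbers; the best_eu helper is purely for illustration): the receiver’s own ex-ante VoI for a forecast is non-negative, but their change in expected utility after updating on one particular true message can be negative.

```python
# Toy illustration: ex ante, the receiver's own VoI for a perfect forecast is
# non-negative, but their change in expected utility after updating on the
# message "it will rain" is negative.

P_SUN = 0.5  # receiver's prior probability of sun
U = {("picnic", "sun"): 10, ("picnic", "rain"): -2,
     ("stay",   "sun"):  0, ("stay",   "rain"):  0}

def best_eu(p_sun):
    """Receiver's expected utility under their best action, given P(sun)."""
    return max(p_sun * U[(a, "sun")] + (1 - p_sun) * U[(a, "rain")]
               for a in ("picnic", "stay"))

voi = (P_SUN * best_eu(1.0) + (1 - P_SUN) * best_eu(0.0)) - best_eu(P_SUN)
print(voi)  # 1.0  -> non-negative, as VoI always is

delta_after_rain_news = best_eu(0.0) - best_eu(P_SUN)
print(delta_after_rain_news)  # -4.0 -> negative for this particular update
```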