Counter-example: http://web.archive.org/web/20090415130842/http://www.weidai.com/smart-losers.txt
Seems to me the proof does not go through because it only considers actions taken by the agent. Quoting from the linked example:

And suppose it’s public knowledge who these “smart” players are. … “smart” players actually end up worse off than “normal” players.
I would say that the proof still goes through. Receiving information cannot hurt you. But if other agents acquire information that you have acquired information—well, that can hurt you.
Politicians instinctively know this, and hence seek “plausible deniability”.
Does the “blind carbon copy” feature in email count as a minimal example of “deniability engineering”? :)
Allow me to rewrite your post. ‘Receiving information cannot hurt you. But receiving information can hurt you.’
Are you saying “Someone else receiving information can hurt you”? Because the injury to you arises from the information the other party received. Regardless of whether you receive any information at all!
Does their thinking you received information have anything at all to do with your receiving information, even slightly correlated? If it does, then you have a situation in which receiving information hurts you and the proof only goes through because it doesn’t consider the other agents.
It explicitly considers only cases where the information does not change payoffs. This is not interesting. This is akin to saying ‘assume getting extra information either results in a gain or no loss; obviously, extra information weakly dominates not getting the extra information since in no circumstance is one worse off, and in some circumstances one is better off.’
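The weak-dominance argument being called trivial here can be written out as a toy expected-utility calculation (states, actions, and payoffs are all hypothetical): so long as the extra information leaves the payoff structure untouched, conditioning on it can only weakly improve the agent's expected utility, by construction.

```python
# Toy illustration (hypothetical payoffs): when information does not change
# payoffs, learning the state before acting weakly dominates not learning it.

# Two equally likely states; the agent picks action A or B.
payoffs = {("s1", "A"): 10, ("s1", "B"): 0,
           ("s2", "A"): 0,  ("s2", "B"): 10}

# Uninformed: must commit to one action across both states.
eu_uninformed = max(
    0.5 * payoffs[("s1", a)] + 0.5 * payoffs[("s2", a)] for a in "AB"
)

# Informed: observes the state first, then picks the best action in each.
eu_informed = (0.5 * max(payoffs[("s1", a)] for a in "AB")
               + 0.5 * max(payoffs[("s2", a)] for a in "AB"))

# Information weakly dominates -- trivially, since nothing else reacts to it.
assert eu_informed >= eu_uninformed
print(eu_uninformed, eu_informed)  # 5.0 10.0
```

The triviality is visible in the setup: no other agent's behavior depends on whether the information was acquired, which is precisely the case the counter-examples in this thread escape.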
This is a little interesting. The snap reply is that correlation does not imply causation, and we are discussing causation. But this snap reply implicitly privileges CDT over EDT and hence indirectly denigrates TDT/UDT. So, OK, your receiving information, through the correlation with someone else receiving information, is negatively correlated with your expected utility. And I continue to claim that you receiving the information doesn’t really cause the harm only because I still don’t understand the virtues of TDT/UDT.
Even more interesting. Are you thinking of cases in which my enjoyment of a movie is ruined because someone has given me an unwanted ‘spoiler’? Yes, that is a counterexample to the theorem. But I think that the reason why the theorem fails is that in this case naive consequentialism fails. It isn’t the end-result that generates utility. It is the path to that result. And possession of the spoiler information short-circuits the high utility pathway.
Not really, because this depends in part on human psychology, and we’d like to discuss more general agents than that. (Why couldn’t other agents find out the spoilers, decide it’s worth seeing, and then give themselves temporary amnesias so as to enjoy the twist ending? etc.)
I am thinking of cases where your seeking information has consequences. Cases like Omega are most obvious (‘Omega comes to you and says he filled both boxes only if you would not ask for additional information’ or something like that).
But they can be more subtle—for example, I’ve been reading up on price discrimination for one of my Nootropics footnotes, and it occurs to me that an Internet company like Amazon could snoop on your web history (through any number of bugs), assess your intellectual level and whether you comparison shop (receive additional information), and then dynamically adjust its prices to leave you with as little consumer surplus as possible—leaving you worse off than if you hadn’t been receiving information.
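That price-discrimination scenario can be sketched as a toy model (the pricing rule and all numbers are hypothetical, not any actual retailer's behavior): the buyer's information-gathering leaves an observable trail, and a seller who reads the trail prices away the consumer surplus the research was supposed to secure.

```python
# Hypothetical sketch: comparison shopping (gathering price information)
# leaves a browsing trail; a seller that snoops on the trail can infer the
# buyer's approximate willingness to pay and set a personalized price,
# leaving the informed buyer with less surplus than an unprofiled one.

WTP = 50            # buyer's private willingness to pay (illustrative)
DEFAULT_PRICE = 30  # flat price charged when the seller cannot profile

def price_for(buyer_researched: bool) -> int:
    """Seller's pricing rule: exploit buyers who leave a research trail."""
    if buyer_researched:
        # The trail reveals (approximately) the buyer's valuation.
        return WTP - 1  # personalized price just under willingness to pay
    return DEFAULT_PRICE

surplus_no_research = WTP - price_for(False)  # no trail: pays flat price
surplus_research = WTP - price_for(True)      # trail exploited by seller

print(surplus_no_research, surplus_research)  # 20 1
```

Note that the buyer is hurt not by the information itself but by the observable act of seeking it, which is exactly the correlation at issue earlier in this thread.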
I’d be faintly surprised if they aren’t doing it already.
As would I. Reading http://33bits.org/2011/06/02/price-discrimination-is-all-around-you/ I infer that the series is going to discuss existing online price discrimination in future posts, to which I look forward.
You can treat TDT/UDT as a causal thing, just with the causal arrows pointing in different directions. This theorem means SOMETHING in TDT, just not the same thing as it means in CDT.
(If my first statement is untrue, you can append “In most circumstances” or some other qualifier.)
In the blackmail examples you should in general be worst off if they think you know they can blackmail you, but you don’t know they can blackmail you.
Allow me to revise your rewrite. “Ceteris paribus, receiving information cannot hurt you. In some non-ceteris-paribus circumstances, receiving information might hurt you.”
Unfortunately, this is exactly what I am objecting to. I agree it is a good heuristic to receive information. This is not what the post is about; it is not about ceteris paribus. Emphasis added:

More information is never a bad thing.

…The second of these is always at least as large as the first.
In a post claiming to offer proofs, I take these universal qualifiers at face value. They may be true in the simplified model. They are not true in many other models, one of which I have linked.
Since I was downvoted so very severely, I’ll add another link, an entire paper by Nick Bostrom on all the kinds of information whose receipt can hurt you: http://www.nickbostrom.com/information-hazards.pdf
In which case, you might as well include the costs for actually figuring it out.