No, that isn’t what taw is saying. The point is that having more information and being known to have it can be extremely bad for you. This is not a counterexample to the theorem, which considers two scenarios whose only difference is in how much you know, but in real-life applications that’s very frequently not the case.
I don’t think taw’s blackmail example is quite right as it stands, but here’s a slight variant that is. A Simple Blackmailer will publish the pictures if you don’t give him the money. Obviously if there is such a person, and if there are no further future consequences, and if you prefer losing the money to losing your reputation, it is better for you to know about the blackmailer so you can give him the money. But now consider a Clever Blackmailer, who will publish the pictures if you don’t give him the money and if he thinks you might give him the money if he doesn’t. If there’s a Clever Blackmailer and you don’t know it (and he knows you don’t know it) then he won’t bother publishing because the threat has no force for you—since you don’t even know there is one. But if you learn of his existence and he knows this then he will publish the pictures unless you give him the money, so you have to give him the money. So, in this situation, you lose by discovering his existence. But only because he knows that you’ve discovered it.
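For concreteness, here is a toy payoff comparison of the two cases (a minimal Python sketch; the dollar and reputation costs, and the rule that the Clever Blackmailer only publishes when the threat can actually move you, are my own illustrative assumptions, not part of taw's example):

```python
# Toy payoff comparison for the two blackmailers described above.
# All numbers and behavioural rules are illustrative assumptions.

COST_MONEY = 10        # utility lost by paying the blackmailer
COST_REPUTATION = 100  # utility lost if the pictures are published

def payoff_simple(you_know: bool) -> int:
    """Simple Blackmailer: publishes whenever he isn't paid."""
    if you_know:
        return -COST_MONEY       # you pay, so he doesn't publish
    return -COST_REPUTATION      # you can't respond to a threat you don't know about

def payoff_clever(you_know: bool) -> int:
    """Clever Blackmailer: publishes only when the threat can move you,
    i.e. when you know about him (and he knows that you know)."""
    if you_know:
        return -COST_MONEY       # the threat now has force, so you pay
    return 0                     # no force, so he doesn't bother publishing

for name, payoff in [("simple", payoff_simple), ("clever", payoff_clever)]:
    print(f"{name}: ignorant {payoff(False)}, informed {payoff(True)}")
# simple: ignorant -100, informed -10  -> knowing helps
# clever: ignorant 0, informed -10     -> knowing (and being known to know) hurts
```

The asymmetry lives entirely in the fact that the Clever Blackmailer's behaviour depends on what you know, which is why this isn't a counterexample to the theorem: the two scenarios differ in more than your state of knowledge.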
The theorem says what it says. Either there is an error in the proof, in which case taw can point it out, or these objections are outside its scope, and irrelevant.
I’m not sure what the point of posting this theorem was. Yes, it holds as stated, but it seems to have very little applicability to the real world. Your tl;dr version is “More information is never a bad thing”, but that is clearly false if we’re talking about real people making real decisions.
The same is true, mutatis mutandis, of Aumann’s agreement theorem. Little applicability to the real world, and the standard tl;dr version “rational agents cannot agree to disagree” is clearly false if etc.
Yes, and not at all coincidentally, some people here (e.g. me) have argued that one shouldn’t use Aumann’s theorem and related results as anything other than a philosophical argument for Bayesianism and that trying to use it in practical contexts rarely makes sense.
The same is also true about any number of obscure mathematical theorems which nevertheless don’t get posted here. That doesn’t help clarify what makes this result interesting.
Here are three theorems about Bayesian reasoning and utility theory:
1. Your prior expectation of your posterior expectation is equal to your prior expectation.
2. Your prior expectation of your posterior expected utility is not less than your prior expected utility.
3. Two people with common priors and common knowledge of their posteriors cannot disagree.
ETA: 4. P(A&B) ≤ P(A).
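For concreteness, here is a quick numerical check of 1, 2 and 4 on a tiny made-up example (the prior, signal accuracy, actions and payoffs below are arbitrary choices of mine, not anything from the post; 3 needs more machinery, so I've omitted it):

```python
from itertools import product

# Hidden state H in {0, 1} with a made-up prior; a noisy signal S that
# matches H with probability 0.8.  All numbers are illustrative.
P_H = {0: 0.7, 1: 0.3}
P_S_GIVEN_H = {(s, h): 0.8 if s == h else 0.2 for s, h in product((0, 1), (0, 1))}

def p_joint(s, h):
    return P_H[h] * P_S_GIVEN_H[(s, h)]

def p_s(s):
    return sum(p_joint(s, h) for h in P_H)

def posterior(s):
    return {h: p_joint(s, h) / p_s(s) for h in P_H}

# 1. Prior expectation of the posterior expectation equals the prior expectation.
prior_mean = sum(h * P_H[h] for h in P_H)
mean_of_posterior_means = sum(
    p_s(s) * sum(h * posterior(s)[h] for h in P_H) for s in (0, 1)
)
assert abs(prior_mean - mean_of_posterior_means) < 1e-12

# 2. Expected posterior expected utility >= prior expected utility,
#    for two made-up actions whose payoff depends on H.
UTILITY = {"bet": {0: -1.0, 1: 2.0}, "pass": {0: 0.0, 1: 0.0}}

def best_eu(dist):
    return max(sum(dist[h] * u[h] for h in dist) for u in UTILITY.values())

prior_eu = best_eu(P_H)
expected_posterior_eu = sum(p_s(s) * best_eu(posterior(s)) for s in (0, 1))
assert expected_posterior_eu >= prior_eu - 1e-12

# 4. P(A & B) <= P(A), with A = "H = 1" and B = "S = 1".
assert p_joint(1, 1) <= P_H[1]

print(prior_mean, mean_of_posterior_means, prior_eu, expected_posterior_eu)
```

The same assertions pass for any choice of prior, likelihoods and payoffs; that is what it means for them to be theorems rather than empirical claims.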
In all these cases:
The mathematical content borders on trivial.
They are theorems—you cannot avoid the conclusions if you accept the premises.
Real people often violate the conclusions.
Real people will expect an experiment to update their beliefs in a certain direction, they will refuse to perform an observation on the grounds that they’d rather not know, and they persistently disagree on many things.
There are many responses one can make to this situation: disputing whether Bayesian utility-maximisation is the touchstone of rational behaviour, disputing whether imperfectly rational people can come anywhere near the ideal implied by these theorems, and so on. (For example.) But whatever your response, these theorems demand one.
For those attempting to build an AGI on the principle of Bayesian utility-maximisation, these theorems say that it must behave in certain ways. If it does not behave in accordance with their conclusions, then it has violated their hypotheses.
This, to me, is what makes these theorems interesting, and their simplicity and obviousness enhance that.
Thanks, that clarifies things.
(I would personally not put this in the same category of interestingness as Aumann’s agreement theorem. It seems like the reasons why Aumann doesn’t apply in real life are far less obvious than the reasons why this theorem doesn’t. But that’s just me—I get your reasoning now.)