I am unsure of what the point of posting this theorem was. Yes, it holds as stated, but it seems to have very little applicability to the real world. Your tl;dr version is “More information is never a bad thing”, but that is clearly false if we’re talking about real people making real decisions.
The same is true, mutatis mutandis, of Aumann’s agreement theorem. Little applicability to the real world, and the standard tl;dr version “rational agents cannot agree to disagree” is clearly false if etc.
Yes, and not at all coincidentally, some people here (e.g. me) have argued that one shouldn’t use Aumann’s theorem and related results as anything other than a philosophical argument for Bayesianism and that trying to use it in practical contexts rarely makes sense.
The same is also true about any number of obscure mathematical theorems which nevertheless don’t get posted here. That doesn’t help clarify what makes this result interesting.
Here are three theorems about Bayesian reasoning and utility theory:
1. Your prior expectation of your posterior expectation is equal to your prior expectation.
2. Your prior expectation of your posterior expected utility is not less than your prior expected utility.
3. Two people with common priors and common knowledge of their posteriors cannot disagree.
ETA: 4. P(A&B) ≤ P(A).
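To make the first two concrete, here is a minimal sketch, not from the original comment, using a toy setup I have made up: a coin whose bias is either 0.2 or 0.8 with equal prior probability, one optional observation of a flip, and a ±1 bet on the next flip. All numbers and names below are purely illustrative.

```python
# Toy sketch (illustrative only) of theorems 1 and 2.
# Setup: a coin whose bias is either 0.2 or 0.8, each with prior probability 0.5.
# We may observe one flip before betting on a second flip (+1 if the bet matches, -1 if not).

priors = {0.2: 0.5, 0.8: 0.5}  # P(bias)

def p_heads(belief):
    """Probability the next flip is heads under a belief over the bias."""
    return sum(bias * p for bias, p in belief.items())

def expected_utility(action, belief):
    """Expected payoff of betting on 'heads' or 'tails' for the next flip."""
    ph = p_heads(belief)
    return 2 * ph - 1 if action == "heads" else 1 - 2 * ph

def posterior(belief, outcome):
    """Bayesian update of the belief over the bias after observing one flip ('H' or 'T')."""
    unnorm = {bias: p * (bias if outcome == "H" else 1 - bias) for bias, p in belief.items()}
    z = sum(unnorm.values())
    return {bias: p / z for bias, p in unnorm.items()}

ph_prior = p_heads(priors)  # 0.5; also the probability the observed flip comes up heads

# Theorem 1: the prior expectation of the posterior P(heads) equals the prior P(heads).
expected_posterior_ph = (ph_prior * p_heads(posterior(priors, "H"))
                         + (1 - ph_prior) * p_heads(posterior(priors, "T")))
print(ph_prior, expected_posterior_ph)  # both ≈ 0.5

# Theorem 2: deciding after the observation is, in prior expectation, no worse than deciding now.
actions = ("heads", "tails")
u_now = max(expected_utility(a, priors) for a in actions)
u_after = (ph_prior * max(expected_utility(a, posterior(priors, "H")) for a in actions)
           + (1 - ph_prior) * max(expected_utility(a, posterior(priors, "T")) for a in actions))
print(u_now, u_after)  # ≈ 0.0 and ≈ 0.36: the extra observation never hurts
```

The two printed comparisons are theorems 1 and 2 in miniature: the expected posterior probability equals the prior probability, and choosing after looking is never worse in prior expectation than choosing now.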
In all these cases:
The mathematical content borders on trivial.
They are theorems—you cannot avoid the conclusions if you accept the premises.
Real people often violate the conclusions.
Real people will expect an experiment to update their beliefs in a certain direction; they will refuse to perform an observation on the grounds that they’d rather not know; and they persistently disagree on many things.
There are many responses one can make to this situation: disputing whether Bayesian utility-maximisation is the touchstone of rational behaviour, disputing whether imperfectly rational people can come anywhere near the ideal implied by these theorems, and so on. (For example.) But whatever your response, these theorems demand one.
For those attempting to build an AGI on the principle of Bayesian utility-maximisation, these theorems say that it must behave in certain ways. If it does not behave in accordance with their conclusions, then it has violated their hypotheses.
This, to me, is what makes these theorems interesting, and their simplicity and obviousness enhance that.
Thanks, that clarifies things.
(I would personally not put this in the same category in interestingness as Aumann’s disagreement. It seems like the reasons why Aumann doesn’t apply in real life are far less obvious than the reasons for why this theorem doesn’t. But that’s just me—I get your reasoning now.)