Contra Kevin Dorst’s Rational Polarization

Epistemic status: trying hard to explain, not persuade. Part of me wants to fight the good fight and protect Bayesian orthodoxy against a corrupting heresy.
MIT Professor Kevin Dorst (LessWrong username: kevin-dorst) has a new argument that, contrary to standard Bayesianism, it can be rational to predict in which directions your beliefs will change.
He explicitly argues against the Conservation of Expected Evidence, and applies this argument to a variety of situations, particularly to political polarization.
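For reference, the Conservation of Expected Evidence is the elementary consequence of the law of total probability that a Bayesian’s current credence in any hypothesis H already equals the expectation of her future credence, taken over whatever evidence (from a partition {E_i}) she might end up conditioning on:

$$\mathbb{E}\big[P(H \mid E)\big] \;=\; \sum_i P(E_i)\,P(H \mid E_i) \;=\; \sum_i P(H \wedge E_i) \;=\; P(H).$$

So a Bayesian who predicts that her credence will drift in a particular direction already has reason to move it there now.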
His new paper has been generating attention, including from renowned figures such as Steven Pinker, and Dorst has been invited on podcasts to defend his position. Dorst also recently made a frontpage post here on this community, titled Polarization is Not (Standard) Bayesian, making similar claims. These claims deserve to be addressed.
Reflection Violations
Here is one of his examples (from the paper):
Let s be a politically-coded belief, e.g. that guns increase safety. You are currently unsure about s, say 50-50, but you also know that you’re going to study at a liberal college. Students at liberal colleges tend to leave with more liberal opinions, and you don’t think you are special with respect to that effect. He argues that it can be rational both to hold your current belief about s and to expect your future belief about s to move in the liberal direction.
He calls situations like that reflection violations. Here Dorst gives us another example:
Reflection violations are mundane. We can often predict how our actions will shift our beliefs, even when those actions provide no evidence about the issue. Not long ago, I had both Piketty’s Capital in the 21st Century and Pinker’s Enlightenment Now on my shelf. It wasn’t hard to predict that reading Pinker would make me more optimistic about our economic system, and reading Piketty would make me less.
In the LessWrong article, he gives a different example of a reflection violation (also called a martingale violation), arguing that even if you don’t know in which direction your beliefs are likely to move (e.g. in the liberal or the conservative direction), there can still be a reflection violation on the correlation between beliefs:
Suppose Nathan is about to immerse himself in political discourse and form some opinions. Currently he’s 50-50 on whether Abortion is wrong (A) and on whether Guns increase safety (G). Naturally, he treats the two independently: becoming convinced that abortion is wrong wouldn’t shift his opinions about guns. As a consequence, if he’s a Bayesian then he’s also 50-50 on the biconditional A<–>G.[7]
He knows he’s about to become opinionated. And he knows that almost everyone either becomes a Republican (so they believe both A and G) or a Democrat (so they disbelieve both A and G). Either way, they’re more confident in the biconditional than he is. He has no reason to think he’s special in this regard, so he can expect that he’ll become more confident of it too—violating martingale.
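Spelling out that last step: with Nathan at 50-50 on A and on G, and treating them as independent,

$$P(A \leftrightarrow G) \;=\; P(A \wedge G) + P(\neg A \wedge \neg G) \;=\; \tfrac12\cdot\tfrac12 + \tfrac12\cdot\tfrac12 \;=\; \tfrac12,$$

yet he expects his future credence in the biconditional to end up above 1/2 whichever side he lands on; this is exactly the kind of martingale violation ruled out by the formula above.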
(Supposed) Mechanism
Dorst argues that such reflection violations can derive from rational Bayesian updating. In his theory, this happens whenever beliefs need to be updated based on what he calls ambiguous evidence.
Evidence is ambiguous when it’s hard to know what to make of it—when it’s rational to be unsure what it’s rational to think (Ellsberg 1961, 661).
He thankfully gives a low-level example of a setting in which his “ambiguous evidence” could appear:
Consider a word-search task (cf. Elga and Rayo 2020). Given a string of letters and some blanks, you have a few seconds to figure out whether there’s an (English) completion. For example:
P_A_ET
And the answer is… yes, there is a completion.
Another:
P_G_ER
And the answer is… no, there is no completion.
He argues there is an asymmetry between finding a word completion and not finding one, that updating on the former is much easier than on the latter, and that this leads to ambiguity.
If you find a completion (‘planet!’), you (often) know that it’s rational to be certain there’s a word (that ~P(Word)=1). But if you don’t find a completion, you don’t know how confident to be—“Maybe I should be doubtful (maybe ~P(Word) is low), but maybe I’m missing something obvious (maybe ~P(Word) is high).” I’ll argue that this generates an ambiguity-asymmetry between completable and uncompletable searches, rationalizing expectable polarization.
Here he argues that this asymmetry can generate a reflection violation:
Meet Haley. She’s wondering whether a fair coin landed heads. I’ll show her a word-search determined by the outcome: if heads, it’ll be completable; if tails, it’ll be uncompletable. Thus her credence in heads equals her credence it’s completable. She’ll have 7 seconds, then she’ll write down her credence. She knows all of this.
Let H and ~H be the rational prior and posterior for Haley. She should initially be 50-50 on heads: H(Heads)=0.5. But I claim her estimate for her posterior rational credence should be higher than 50%: E_H(~H(Heads)) > 0.5.
(...)
Intuitively: it’s easier for her to assess her evidence when the string is completable (when the coin lands heads) than when not. So if heads, her credence should (on average) increase a lot; if tails it should (on average) decrease a bit; and the average of ‘increase a lot’ and ‘decrease a bit’ is ‘increase a bit’.
Dorst’s Objection to the Standard Response, and the Bayesian Refutation
Dorst correctly identifies the standard reply:
Standard Bayesians will balk. They’ll say that we must find the most fine-grained question (partition) Q that Haley can always answer with certainty, and that she’s rational iff she conditions on the true answer to Q. It’s as if she rummages around in her head for a completion; at the end all she learns is either that the search succeeded (Find) or failed (¬Find); so Q = {Find, ¬Find}. (If she learns more, they’ll insist there’s a finer-grained Q to update on.)
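For concreteness, here is the kind of calculation this standard reply leads to, using the numbers Dorst adopts later for this setup (probability 1/2 that there is no word, 1/4 that there is a word she finds, 1/4 that there is a word she fails to find), and writing Ĥ for the rational posterior (the ~H of the quotes). Conditioning on the true cell of Q = {Find, ¬Find}:

$$\hat H(\text{Word}\mid \text{Find}) = 1, \qquad \hat H(\text{Word}\mid \neg\text{Find}) = \frac{1/4}{1/2 + 1/4} = \frac{1}{3},$$

$$\mathbb{E}_H\big[\hat H(\text{Word})\big] = \tfrac14\cdot 1 + \tfrac34\cdot\tfrac13 = \tfrac12 = H(\text{Word}).$$

Her expected posterior equals her prior, whichever cell she ends up conditioning on.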
Dorst proceeds with calculations confirming that, under this standard approach, which he calls a “model”, no reflection violation occurs. He then objects to the correctness of such a “model”.
I object. It’s implausible to insist that such a model is always correct. As I’ve argued, that doesn’t follow from the justifications of Bayesianism (§3). Moreover, it rules out the possibility of ambiguity, so ignores the most salient feature of a word-search: that it’s easier to know what to make of your evidence when you’ve found a word than when you haven’t.
This objection unfortunately confuses several different concepts. He first conflates a prior over the joint distribution (Word, Found) with a model of how these variables interact. Having a prior over the joint distribution is not the same as having a “model that is always correct”.
Quite the opposite: a good Bayesian prior is supposed to reflect the uncertainty about which models are correct; the prior distribution is what you get when, in theory, you marginalize over all possible models, or, in practice, when you use an approximation to that. Nowhere does a Bayesian have to place absolute trust in the correctness of any particular model, contrary to what Dorst implies.
Rather, the common intuition that one should not place absolute trust in “models” is put to work arguing that one should have no prior at all over the joint distribution of possible evidence. This despite his concession that one can have priors over the simpler variables we are ultimately interested in.
Of course, a prior over a joint distribution is nothing but a collection of current beliefs, one for each possibility in the product space. If you have a current belief for (Word & Found), another for (Word & ~Found), another for (~Word & Found) (perhaps close to zero if your search rarely produces false positives), and finally one for (~Word & ~Found), then these beliefs constitute the prior you need. He should concede this is possible, since he doesn’t deny the existence of current beliefs; but once you accept a richer prior over the joint distribution (one that includes the possible evidence), the evidence is no longer ambiguous, and any supposed reflection violation becomes impossible.
Dorst continues with an intuition:
If you haven’t found one when the 7-second timer goes off, your credence that there’s a word may have gone down or gone up, but you won’t (shouldn’t!) be willing to bet the farm that it’s moved in the rational direction. After all, sometimes it doesn’t: if your credence went down to 1⁄3, and then I whisper ‘heart’, you might think, “Oh! I should’ve seen that...”. It was rational for you to have more than 1⁄3 credence in a completion; after all, you know that ‘heart’ is a word—you just failed to make proper use of that knowledge.
Of course, “betting the farm” is not a good framing for thinking about probabilities. Ordinary risk aversion means you probably shouldn’t “bet the farm” either way! It may be particularly painful to lose a bet because you “missed” an easy word, or one that with the benefit of hindsight seems like it should have been easy to spot, but such an asymmetry of feelings shouldn’t interfere with a careful analysis of the probabilities.
Moving on, the expression “Oh! I should’ve seen that...” can mean any of the following:
“I should have found it!”, as in “I’m frustrated my word search didn’t work.” I might consciously try to apply a negative reward to my neural “word-search” circuits, as if telling them “you did worse than expected; whatever you tried didn’t work”. This is not about rationally updating beliefs on new evidence, but about whether one can reasonably expect some source of evidence (in this case, your own word-search skill) to be of higher quality in the future.
“I regret giving such a low probability (1/3) to a word existing, as it turned out there was one! I wish I could just… erh… have magically known that..., or at least have kept my probability near my original 50%, even though the only evidence I had (not finding the word) pointed towards decreasing it.” This is not what productive or rational thought looks like.
“Even though I didn’t find the word, there were other clues, additional evidence, that could have indicated a word existed. I should have also updated on those other cues, not only on the fact that I didn’t find the word.” This is a perfectly valid point (considering more evidence should improve accuracy!) but it is entirely unrelated to the question of how to rationally update beliefs based on the evidence at hand.
More Ambiguity? Confusion?
In the same example, which serves as the main illustration for his crucial mechanism, Dorst continues arguing against the Standard Bayesian interpretation by appealing to additional complexity in the evidence, which in his opinion exemplifies the notion of “ambiguity”.
These are intuitions. If we couldn’t make precise sense of them, perhaps they could be ignored. But we can—just introduce ambiguity. Here’s one way to do so. There’s more that Haley (is and) should be sensitive to than what she can settle with certainty. Beyond whether she found a completion, there’s the question of whether the string is ‘word-like’—whether it contains subtle hints that it’s completable. If it does, she should increase her credence it’s completable; if it doesn’t, she should decrease it. But—and here’s where ambiguity comes in—she can’t always tell with certainty whether it’s word-like, and hence can’t always tell whether her credence should go up or down.
Here he adds an additional variable: besides there being a completion (Word) and Haley finding it or not (Find), he adds a further evidence-variable (Word-like) denoting whether Haley finds the string to have “subtle hints that it’s completable”.
The example is complicated further by adding that Haley is unsure of whether she observes word-likeness or not. The careful reader starts to feel some force acting to obscure any concrete consideration of what, if anything, is being observed by Haley. We sure can’t have a proper conversation about how to update beliefs based on observations when there is no agreement on what the observations are.
He details his model further (line breaks added by me for clarity):
Here’s a simple model (details in §4.1). Suppose, as before, it’s 1⁄2 likely there’s no word (and so she doesn’t find one), 1⁄4 likely there’s a word she finds, and 1⁄4 likely there’s a word she doesn’t find. Moreover, suppose she knows the string will be word-like iff there’s a word. If she finds a word, she’s rational to become certain there’s one: ~H(Word)=1.
If she doesn’t find a word and there is none — so it’s not word-like — she’s rational to drop her confidence slightly: ~H(Word)=1/3.
So far this is just like the Standard-Bayesian model. Yet suppose that if she doesn’t find a word but there is one (so the string is word-like), she’s rational to raise her credence slightly—she should suspect it’s word-like: ~H(Word)=2/3.
This yields ambiguous evidence: if she doesn’t find a word, she’s rational to be unsure whether the rational posterior is 1⁄3 or 2/3: ~H(~H(Word)=1/3) > 0 and ~H(~H(Word)=2/3) > 0. (Which one it is depends on whether the string is word-like—but she’s also rational to be unsure of that. There is no cognitive home; see Williamson 2000; Srinivasan 2015.)
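Spelling out the arithmetic: under these stipulated posteriors, the expected posterior is

$$\mathbb{E}_H\big[\hat H(\text{Word})\big] \;=\; \tfrac12\cdot\tfrac13 + \tfrac14\cdot 1 + \tfrac14\cdot\tfrac23 \;=\; \tfrac{7}{12} \;>\; \tfrac12,$$

so the model does deliver the reflection violation Dorst is after, in contrast with the 1/2 obtained from the standard partition above.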
At this point a Bayesian may suspect that the entire concept of “ambiguous evidence” is rooted in deep confusion.
Rather than talking about whether the string is “word-like” or not, a concept here indistinguishable from a word existing (and therefore useless), and about Haley being “uncertain” of that, we should talk about whether Haley observes “wordlikeness” or not (a concrete observation, even if completely subjective). We could even complicate things further by making the observed “wordlikeness” continuous. But in any case we would have a clear understanding, at least under this model, of what Haley’s observation is.
With this powerful mental concept, we can now ask about Haley’s prior distribution, not only over the possible observations, but over the joint distribution of observations AND results (in this case, whether a word exists or not).
With such concepts, the notion of “ambiguous evidence” dissolves itself: ambiguity has to reside in the priors, in the current beliefs, rather than in future evidence or in how best to interpret it. Given a detailed enough representation of the current beliefs, there can be no justification for predictable polarization or reflection violations of any sort.
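To make this concrete, here is a minimal sketch in Python, with toy numbers of my own (not Dorst’s), of what happens once “it seems word-like to Haley” is treated as an actual observation she conditions on, alongside whether she finds a completion. However noisy the cue is, the prior-weighted average of her posteriors equals her prior, so no reflection violation is possible.

```python
from itertools import product

# Conditional pieces of an assumed, purely illustrative prior.
P_WORD = 0.5                 # P(a completion exists)
P_FIND_GIVEN_WORD = 0.5      # she finds an existing completion half the time
P_FIND_GIVEN_NOWORD = 0.0    # she never "finds" a completion that isn't there
P_SEEMS_GIVEN_WORD = 0.8     # the "feels word-like" cue is real but noisy
P_SEEMS_GIVEN_NOWORD = 0.3   # ...and sometimes misfires on uncompletable strings

def joint(word: bool, find: bool, seems: bool) -> float:
    """Joint prior P(word, find, seems) under the assumed structure above."""
    p = P_WORD if word else 1 - P_WORD
    p_find = P_FIND_GIVEN_WORD if word else P_FIND_GIVEN_NOWORD
    p *= p_find if find else 1 - p_find
    p_seems = P_SEEMS_GIVEN_WORD if word else P_SEEMS_GIVEN_NOWORD
    p *= p_seems if seems else 1 - p_seems
    return p

prior_word = sum(joint(True, f, s) for f, s in product([True, False], repeat=2))

# Haley's observation is the pair (did I find a word?, does it seem word-like?).
expected_posterior = 0.0
for find, seems in product([True, False], repeat=2):
    p_obs = sum(joint(w, find, seems) for w in (True, False))
    if p_obs == 0:
        continue  # skip observations the prior rules out entirely
    posterior = joint(True, find, seems) / p_obs
    expected_posterior += p_obs * posterior
    print(f"find={find!s:5} seems={seems!s:5}  P(obs)={p_obs:.3f}  P(Word|obs)={posterior:.3f}")

print(f"prior P(Word):              {prior_word:.3f}")
print(f"expected posterior P(Word): {expected_posterior:.3f}")  # equals the prior
```

The equality at the end is just the law of total probability; it holds for any joint prior and any observation partition, which is the sense in which a detailed enough set of current beliefs leaves no room for predictable polarization.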
We can hypothesize that apparent ambiguity about how to interpret new evidence, or how to update beliefs based on it, comes from ambiguity about what the observations are, from inconsistent or underdetermined previous beliefs, or from errors in approximation and computation.
Conclusion
These failure modes are understandable, and they may well be a mechanism for predictable polarization in practice. After all, it is very hard to maintain a consistent set of current beliefs about all the possible evidence you may be subjected to, much less over the joint distribution of evidence about the things you care about. It is also very hard to update this exponentially large space of beliefs according to the laws of probability; it is inevitable that approximations will be used to represent both the current beliefs and the updates.
It therefore should not be surprising if, in practice, updates to human beliefs end up violating martingale properties. But rather than accepting such violations as “rational”, one should work to recognize them for what they are: evidence that at some point, present or future, one’s rationality is falling short.