A problem with group epistemics
(Cross-posted from my site.)
My goal here is to quickly describe a potential problem that groups of people may have when trying to collectively figure out the truth of some subject.
The basic idea is that people may use each other's beliefs to inform their own, without properly accounting for the fact that others' beliefs are based on some of the same evidence as their own.
In what follows I present a simplified example of how this might happen, a bit of math to guide our thinking, and some illustrative plots.
The basic takeaway is that a failure to “de-correlate” others’ beliefs from the supporting evidence leads to sub-optimal epistemics. In particular:
Groups that make this error may be overconfident in their beliefs.
This can make it harder to recover accurate beliefs if the first people to explore a topic reach conclusions far from the truth.
An illustrative example
Suppose that Alice is trying to figure out whether the following claim is true:
AGI will be developed sometime in the next 20 years.
Alice starts with some prior, and after investigating this claim updates her belief to be more confident that it is true.
Later, Bob comes along and wants to know whether this claim is true. He investigates the claim, using, for the most part, the same sources of information that Alice used, and arrives at a belief similar to Alice’s posterior. He then notes that Alice also updated her belief to be more confident of the claim’s truth, and uses that as additional evidence; he thus ends up even more confident than Alice that AGI will be developed in the next 20 years.
Bob’s last step is of course an error: if Alice and Bob used the same information to form their beliefs, Bob shouldn’t use Alice’s belief to inform his own belief—the information expressed through Alice’s posterior belief is the same as the information in the evidence that Bob already reviewed. Bob needs to “de-correlate” the information he sees from the information in Alice’s belief—in this case, he will find that the two sources of information are perfectly correlated, meaning that they are redundant.
We probably aren't so naive as to completely fail to de-correlate the information in other people's beliefs from the evidence we see ourselves; but it seems likely that we make this mistake to some extent. We may, therefore, want to be a bit skeptical of widely held beliefs formed in a group where raw evidence is scarce relative to the discussion about how to interpret that evidence. (The claim about AGI was chosen intentionally, with this in mind.)
A model
It might help to think about how this might happen in a more formal way. If you don’t like math, feel free to skip ahead.
Let's suppose that we have n people who want to figure out the proper level of credence, p, for some claim. As a prior, each person takes p to be uniformly distributed between 0 and 1.
Now suppose that there are n pieces of independent evidence, $\theta = \{\theta_1, \theta_2, \ldots, \theta_n\}$, each drawn independently from $\mathrm{Bernoulli}(p)$. (Each $\theta_i$ has a probability p of being 1 and a probability $1 - p$ of being 0, so their sum is $\mathrm{Binom}(n, p)$.)
In order, each player $i$ receives the signal $\theta_i$ and reviews the beliefs of everyone who has already received their signal. (That is, player $i$ observes the signal $\theta_i$ and the posteriors of players $1, 2, \ldots, i-1$.)
Denoting the common prior as $\pi(p) = \mathrm{Beta}(1, 1)$, we can calculate using Bayes' theorem that player 1, upon observing $\theta_1 \in \{0, 1\}$, forms the posterior

$$
\pi(p \mid \theta_1) \propto f(\theta_1 \mid p)\,\pi(p) = \theta_1 p + (1 - \theta_1)(1 - p) =
\begin{cases}
p^{1}(1 - p)^{0}, & \theta_1 = 1 \\
p^{0}(1 - p)^{1}, & \theta_1 = 0
\end{cases}
$$

$$
\pi(p \mid \theta_1) =
\begin{cases}
\mathrm{Beta}(2, 1), & \theta_1 = 1 \\
\mathrm{Beta}(1, 2), & \theta_1 = 0
\end{cases}
$$
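As a quick sanity check on this update (a sketch of my own, not code from the post), one can do the Bayes update numerically on a grid and confirm that it reproduces the Beta(2, 1) density:

```python
# A quick numerical sanity check (my sketch, not from the post): update a
# uniform prior on p with a single Bernoulli observation on a grid and
# compare the result to the Beta(2, 1) density derived above.
import numpy as np
from scipy.integrate import trapezoid

p_grid = np.linspace(0.0, 1.0, 1001)
prior = np.ones_like(p_grid)                  # Beta(1, 1), i.e. the uniform prior
theta_1 = 1                                   # suppose player 1 observes a "success"

likelihood = p_grid**theta_1 * (1.0 - p_grid)**(1 - theta_1)
posterior = prior * likelihood
posterior /= trapezoid(posterior, p_grid)     # normalize numerically

print(np.allclose(posterior, 2 * p_grid))     # Beta(2, 1) has density 2p, so: True
```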
In general, if player n properly de-correlates previous signals, they should end up with

$$
\pi(p \mid \theta) = \mathrm{Beta}\!\left(1 + \sum_i \theta_i,\; 1 + \sum_i (1 - \theta_i)\right)
$$
The danger is that players double-count earlier signals: e.g., player 3 sees the beliefs of players 1 and 2 and treats them as based on independent evidence, when in reality player 2's belief already incorporates the evidence given to player 1. If players (wrongly) assume complete independence of beliefs, player n ends up with

$$
\pi(p \mid \hat{\theta}) = \mathrm{Beta}\!\left(1 + \sum_i (n - i + 1)\,\theta_i,\; 1 + \sum_i (n - i + 1)(1 - \theta_i)\right)
$$
Again, the basic idea is that earlier signals end up getting too much weight.
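To make the two rules concrete, here is a minimal sketch (my own; the post itself includes no code) that computes the Beta parameters under the correct rule, which counts each signal once, and under the naive rule, which weights signal $\theta_i$ by $n - i + 1$:

```python
# A minimal sketch (not from the post) of the two updating rules above.
import numpy as np


def correct_posterior(signals):
    """Beta(a, b) parameters when each signal is counted exactly once."""
    signals = np.asarray(signals)
    a = 1 + signals.sum()
    b = 1 + (1 - signals).sum()
    return a, b


def naive_posterior(signals):
    """Beta(a, b) parameters when signal i is (wrongly) counted n - i + 1 times."""
    signals = np.asarray(signals)
    n = len(signals)
    weights = np.arange(n, 0, -1)   # signal 1 gets weight n, ..., signal n gets weight 1
    a = 1 + (weights * signals).sum()
    b = 1 + (weights * (1 - signals)).sum()
    return a, b


rng = np.random.default_rng(0)
signals = rng.binomial(1, 0.5, size=10)   # 10 independent Bernoulli(0.5) signals
print("correct:", correct_posterior(signals))
print("naive:  ", naive_posterior(signals))
```

Note that the naive parameters sum to roughly $n(n+1)/2 + 2$ rather than $n + 2$, so the implied "sample size" is far larger than the actual number of signals; that is why the naive posterior comes out so much narrower than it should be.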
Some simulations
Below are two plots generated using the model presented in the previous section.
The first plot shows the posteriors that result from a group of 10 people who receive independent evidence. The correct belief in this example is p=0.5.
You can see that the incorrect updating rule yields a posterior that is too narrow: it confidently predicts the wrong value.
The second plot shows a case where p=0.9, but the evidence starts with a run of unlikely (possibly mistaken) observations pointing in the other direction.
Using the wrong updating method means that we put too much weight on those initial observations and move back toward the correct belief more slowly.
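The plots themselves aren't reproduced here, but the qualitative behaviour is easy to check. Below is a sketch (my own, with arbitrary choices for the sequence length and for how many misleading observations come first, not the post's actual simulation code) that tracks the posterior mean under both rules when p is 0.9 but the first few signals are 0s:

```python
# A sketch (my own parameter choices, not the post's simulation code):
# p = 0.9, but the first three signals are misleading 0s. Track the posterior
# mean after each new signal under the correct and the naive updating rule.
import numpy as np

rng = np.random.default_rng(1)
signals = np.concatenate([np.zeros(3, dtype=int),              # misleading start
                          rng.binomial(1, 0.9, size=17)])      # then Bernoulli(0.9) draws

for rule in ("correct", "naive"):
    means = []
    for k in range(1, len(signals) + 1):
        seen = signals[:k]
        weights = np.ones(k) if rule == "correct" else np.arange(k, 0, -1)
        a = 1 + (weights * seen).sum()
        b = 1 + (weights * (1 - seen)).sum()
        means.append(a / (a + b))
    print(rule, np.round(means, 2))
```

Under the naive rule the early 0s are weighted more and more heavily as the group grows, so the posterior mean climbs back toward 0.9 noticeably more slowly than under the correct rule.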