Consider a group of 100 ideally rational agents who, for some reason, cannot establish a government capable of collecting taxes or enforcing contracts at low cost. They all assign probability 0.99 to some idea A being true, but it would be socially optimal for one individual to continue scrutinizing it for flaws. Suppose that's because a single individual could eventually detect a flaw, if one exists, at an expected search cost of $1, and knowing that the flaw exists would be worth $10 to each of the 100 agents. Unfortunately, no individual agent has an incentive to do this on their own, since it would decrease their individual expected utility (an expected gain of 0.01 × $10 = $0.10 against a $1 cost, versus 0.01 × 100 × $10 = $10 for the group), and they can't solve the resulting public goods problem due to the large transaction costs.
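To make the incentive gap concrete, here is a minimal sketch of the payoff arithmetic, assuming the $1 expected search cost is borne by the scrutinizing agent whether or not a flaw turns up (the variable names are illustrative, not from the original scenario):

```python
# Payoff arithmetic for the 100-agent scrutiny problem (illustrative sketch).
n_agents = 100        # size of the group
p_flaw = 0.01         # agents assign probability 0.99 to A, so 0.01 to a flaw
value_per_agent = 10  # worth of knowing the flaw exists, per agent ($)
search_cost = 1       # expected cost of scrutinizing A ($), paid by the searcher

# Expected utility for a lone agent who scrutinizes A:
individual_ev = p_flaw * value_per_agent - search_cost
# Expected utility for the whole group if one agent scrutinizes A:
group_ev = p_flaw * n_agents * value_per_agent - search_cost

print(f"individual EV: {individual_ev:+.2f} dollars")  # -0.90: individually irrational
print(f"group EV:      {group_ev:+.2f} dollars")       # +9.00: socially optimal
```

So scrutiny is a public good: each searcher internalizes only 1/100th of the benefit, which is why no one does it absent cheap coordination.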
Ok, so one of the agents being epistemically flawed may solve a group coordination problem. I like the counterfactual; could you flesh it out slightly to specify what payoff each individual gets for exploring ideas and contributing them to the collective?