There’s a lot in this post that I agree with, but in the spirit of the advice in this post, I’ll focus on where I disagree:
If you are moving closer to truth—if you are seeking available information and updating on it to the best of your ability—then you will inevitably eventually move closer and closer to agreement with all the other agents who are also seeking truth.
But this can’t be right. To see why, substitute “making money on prediction markets” for “moving closer to truth”, “betting” for “updating”, and “trying to make money on prediction markets” for “seeking truth”:
If you are making money on prediction markets—if you are seeking available information and betting on it to the best of your ability—then you will inevitably eventually move closer and closer to agreement with all the other agents who are also trying to make money on prediction markets.
But the only way to make money on prediction markets is by correcting mispricings, which necessarily entails moving away from agreement with the consensus market price. (As it is written, not every change is an improvement, but every improvement is necessarily a change.)
Before thinking about prediction markets, let’s imagine a scenario where type-A agents are trying to figure out the properties of the tiles on the floor, and type-B agents aren’t; maybe they’re treating the properties of the tiles as an infohazard, or trying to get a politically correct answer, or just don’t care, etc. In this case, although they start out with a wide distribution over tile properties, type-A agents will tend to get similar answers even without communicating (by looking at the tiles), and will get even more similar answers if they do communicate. So Duncan’s original statement seems correct in this case.
With respect to prediction markets, the rephrased statement also seems true. People who are trying to make money on prediction markets will, even though they disagree with each other, each bet against obvious falsehoods in the market prices. They will therefore end up in a “correct contrarian cluster” which differs from the general trader distribution in the direction of the obvious pricing corrections. The traders trying to make money will move away from agreement with consensus market prices, but will move towards agreement with each other, as they notice the same obvious mispricings.
I suppose if the traders all started out with the consensus market prices as their credences, then correcting the market would almost necessarily involve at least temporarily having higher variance in one’s credences, so would look like disagreement compared to the initial state. However, the initial market prices, as in the tile case, would tend to represent wide, uninformative distributions; the agents trying to make money would over time develop more specific beliefs, reaching more substantive agreement than they had initially. It’s like the difference between agreeing with someone that there’s a 50% chance a coin will turn up heads, and agreeing with someone that there’s a 99% chance that a coin will turn up heads; the second agreement is more substantive even if there is agreement about probabilities in both cases.
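This dynamic can be sketched in a toy simulation (my own illustration, not from the post, with an assumed true probability, learning rate, and update rule): traders all start at an uninformative 0.5 consensus price, each observes independent noisy signals, and each updates toward what they see. They end up far from the initial consensus price but clustered near each other, i.e. more substantive agreement than they started with.

```python
import random

random.seed(0)

TRUE_PROB = 0.9     # hypothetical true probability of the event
market_price = 0.5  # wide, uninformative starting consensus
n_traders = 5
credences = [market_price] * n_traders  # traders start at consensus

for step in range(200):
    for i in range(n_traders):
        # each trader observes an independent Bernoulli draw of the event
        signal = 1 if random.random() < TRUE_PROB else 0
        # simple exponential update toward the observed frequency
        credences[i] += 0.05 * (signal - credences[i])
    # profit-seeking bets push the price toward the traders' average credence
    avg = sum(credences) / n_traders
    market_price += 0.1 * (avg - market_price)

# distance from the initial 0.5 consensus vs. mutual disagreement
moved = abs(market_price - 0.5)
spread = max(credences) - min(credences)
```

The specific update rule is arbitrary; the point is only that agents who track the same signals disagree with the initial consensus far more than they end up disagreeing with each other (`moved` is large, `spread` is small), which is the "agreeing at 99% rather than 50%" sense of substantive agreement.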
In Popperian epistemology, it’s a virtue to propose hypotheses that are easily disproven...which isn’t the same thing as always incrementally moving towards truth: it’s more like babble-and-prune. Of course, the instruction to converge on truth doesn’t quite say “get closer to truth in every step—no backtracking”—it’s just that Bayesians are likely to take it that way.
And of course, epistemology is unsolved. No one can distill the correct theoretical epistemology into practical steps, because no one knows what it is in the first place.