My other comment was about the ambitious side of prediction markets. This will be about the unambitious side, how they don’t have to do much to be better than the status quo.
“above technical problems. We are optimistic that these can be overcome.”
What problems do you mean, the ones raised one and three paragraphs earlier? That could be clearer. Are you really optimistic, or is this apophasis, in which you deniably assert problems? Well, I’m going to talk about them anyway.
“additional limitation to prediction markets is that people have to be interested enough to take part in them”
Robin Hanson has always said you get what you pay for. If information is valuable to you, pay for it by subsidizing the market. Betting markets aren’t free, but are they cheaper or more accurate than the alternative? Start with things that bettors care about, like politics.
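Since the subsidy mechanism Hanson himself proposed is the logarithmic market scoring rule (LMSR), here is a minimal sketch of what “paying for information by subsidizing the market” looks like under that assumption; the class, the liquidity value, and the outcome names are illustrative, not anything from this thread. The parameter b bounds the sponsor’s worst-case loss at b·ln(n) for n outcomes, so b is literally the price you pay for the information.

```python
import math

class LMSRMarket:
    """Minimal logarithmic market scoring rule market maker (illustrative)."""

    def __init__(self, outcomes, b=100.0):
        self.b = b                            # liquidity / subsidy parameter
        self.q = {o: 0.0 for o in outcomes}   # shares sold of each outcome

    def _cost(self, q):
        # C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q.values()))

    def price(self, outcome):
        """Current implied probability of an outcome."""
        z = sum(math.exp(x / self.b) for x in self.q.values())
        return math.exp(self.q[outcome] / self.b) / z

    def buy(self, outcome, shares):
        """Cost a trader pays to buy `shares` of `outcome` from the market maker."""
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

m = LMSRMarket(["resolves yes", "resolves no"], b=100.0)
print(m.price("resolves yes"))    # 0.5 before any trades
print(m.buy("resolves yes", 50))  # trader pays ~28.1 to move the market
print(m.price("resolves yes"))    # ~0.62 after the trade
# The sponsor's worst-case loss is b * ln(2), about 69.3: that is the subsidy.
```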
“Pope Francis’ next pronouncement”
Having a market on his next pronouncement would encourage leaks. I’m not sure whether that would be good or bad. Having a market for the first papal pronouncement of 2021 that closed a year ahead probably wouldn’t produce leaks. Nor would it produce a precise answer, but it would produce some kind of average that might be interesting. For comparison, the Nobels rarely leak, so the markets don’t vary much from year to year. Is it useful to know that Haruki Murakami is usually at the top of the list? Some people are skeptical, though.
“When information is not widely distributed or discoverable, prediction markets are not useful. Prediction markets for WMDs”
Those are two separate problems, and both apply to WMDs.
As for discoverability, in 2000 you could have had a market on what inspectors would find within a year. You could also have had a market on what inspectors would find within a decade. You could imagine a market on what the consensus would be in 2010, though how that would work is more speculative. In 2002 it would have been straightforward to run a conditional market, conditional on invasion. I hope that simply setting up a market would have encouraged precision: chemical vs. nuclear, stockpiles vs. production, and quantities. Such distinctions seem like an easy way to improve the public debate.
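To make the conditional-market idea concrete, here is a hypothetical sketch of how a 1:1 bet “conditional on invasion” could settle; the function and argument names are made up for illustration and are not from the comment. If the condition never occurs, stakes are simply refunded, so prices only reflect beliefs about the world in which the condition holds.

```python
def settle_conditional(stake, bet_on_yes, condition_met, outcome_yes):
    """Payout of a 1:1 bet in a conditional market (illustrative)."""
    if not condition_met:
        return stake              # no invasion: the bet is voided and refunded
    if outcome_yes == bet_on_yes:
        return 2 * stake          # invasion happened and the bettor was right
    return 0.0                    # invasion happened and the bettor was wrong

# No invasion: everyone gets their stake back and the market is void.
print(settle_conditional(10.0, True, condition_met=False, outcome_yes=False))   # 10.0
# Invasion and no WMD found: "no" bettors double their stake, "yes" bettors lose.
print(settle_conditional(10.0, False, condition_met=True, outcome_yes=False))   # 20.0
```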
As for wide distribution, so what? We want an opinion, even if it is not very certain. In fact, open sources should have been enough to beat the CIA in Iraq. Partly that is because the CIA is incompetent, but partly it is because the CIA is not on our side. I think that open-source amateurs have done a pretty good job of predicting the North Korean nuclear missile program. How well did Intrade do in predicting North Korean missile tests in 2006? I don’t know, but they did a lot better than the DOD at postdiction. (In fact, I was somewhat surprised that the administration accepted a lack of WMD in Iraq and did not fabricate them.)
Of course, we don’t have direct access to the territory, only the map. Prediction markets can only be judged by the future map. I am extremely pessimistic about our ability to create a collective map, so I think prediction markets have only a very low bar to clear. From your user name, you sound like a scholastic apologist, whereas I am very cynical about the schools. I don’t dispute that they house expertise, but they abuse that position by, among many other things, simply lying about the consensus in their field. A very simple step forward would be to use surveys to assess consensus. And when fields interact, it is even worse. As I’ve said elsewhere:
1. If you want to make an Appeal to the Authority of the Scientific Consensus, you have to be able to figure out what that consensus is. What does it matter that nutritionists have a good consensus if you don’t know what it is, and instead believe the medical school curriculum? Similarly, your psychology textbook lied to you.
2. It is very common, both in your economics example and in the Nurture Assumption case, that there is an explicit consensus that method X is better than method Y, but people just keep using method Y. It seems very fair to me to describe that as an implicit consensus that method Y is good enough. Moreover, it is common for the consensus to accept the aggregate results of the inferior method just because of the volume of publications; and thus the explicit consensus at the object level is produced by methods that violate the explicit consensus on methodology. (Also, most of the replication-crisis details were discussed at length by the leading psychologist Paul Meehl fifty years ago. Everyone abased themselves in shame and proceeded to change nothing. Is this time different?)
3. You might say that (1) is a special case of (2). Everyone accepts that you should look to experts with good methods, but they don’t actually pay attention to nutritionists. I think that it is fair to call the exclusion of nutritionists “the scientific consensus.”
A lot of information technologies provide value simply by creating conflict. The internet makes it easy to find people who disagree with you, if you want to. Wikipedia provides a focal point for disagreeing parties to fight over, forcing both sides to acknowledge the other’s existence, making it easy for ignorant amateurs to notice the breakdown of consensus. Similarly, prediction markets provide opportunity for disagreement on a more fine-grained level.
And let me close with a less hostile, more amusing example of lack of consensus.
Your points are well-taken. And thanks for pointing out the ambiguity about what problems can be overcome. I will clarify that to something more like “problems like x and y can be overcome by subsidizing markets and ensuring the right incentives are in place for the right types of information to be brought to light.”
I had already retitled this section in my doc (much expanded and clarified) ‘Do Prediction Markets Help Reveal The Map?’ which is a much more exact title, I think.
I am curious what you mean by creating ‘a collective map.’ If you mean achieving localized shared understanding of the world, individual fields of inquiry do that with some success. But if you mean creating collective knowledge broad enough that 95% of people share the same models of reality, you are right: forget it. There’s just too much difference in the way communities think.
As for the 14th-century John Buridan, the interesting thing about him is that he refused to join one of the schools and instead remained an Arts Master all his life, specializing in philosophy and the application of logic to resolve endless disputes in different subjects. At the time people were expected to join one of the religious orders and become a Doctor of Theology. He carved out a more neutral space away from those disputations and refined the use of logic to tackle problems in natural philosophy and psychology.
What do I mean by “pessimistic about our ability to create a collective map”? Maybe I should not have said “pessimistic,” but instead “cynical.” There are lots of places where we claim to have consensus, and I think those claims are false. I gave lots of examples of very small-scale failures to communicate, like one department of a medical school lying about the work of another. If we revere certain people as experts, it behooves us to find out what they claim. Finding that out would count as promoting a collective map.