Prediction Markets Are Confounded: Implications for the Feasibility of Futarchy
(tl;dr: In this post, I show that prediction markets estimate non-causal probabilities and therefore cannot be used for decision making by rational agents following causal decision theory. I provide an example of a simple situation where such confounding leads a society that has implemented futarchy to make an incorrect decision.)
It is October 2016, and the US Presidential Elections are nearing. The most powerful nation on earth is about to make a momentous decision about whether being the brother of a former president is a more impressive qualification than being the wife of a former president. However, one additional criterion has recently become relevant in light of current affairs: Kim Jong-Un, Great Leader of the Glorious Nation of North Korea, is making noise about his deep hatred for Hillary Clinton. He also occasionally discusses the possibility of nuking a major US city. The US electorate, desperate to avoid being nuked, has come up with an ingenious plan: they set up a prediction market to determine whether electing Hillary will affect the probability of a nuclear attack.
The following rules are stipulated: There are four possible outcomes: “Hillary elected and US nuked”, “Hillary elected and US not nuked”, “Jeb elected and US nuked”, “Jeb elected and US not nuked”. Participants in the market can buy and sell contracts for each of these outcomes; the contract that corresponds to the actual outcome will expire at $100, and all other contracts will expire at $0.
Simultaneously, in a country far, far away, a rebellion is brewing against the Great Leader. The potential challenger not only appears to have no problem with Hillary; he also seems like a reasonable guy who would be unlikely to use nuclear weapons. It is generally believed that the challenger will take power with probability 3/7, and will be exposed and tortured in a forced labor camp for the rest of his miserable life with probability 4/7. Let us stipulate that this information is known to all participants; I am adding this clause in order to demonstrate that the argument does not rely on unknown information or information asymmetry.
A mysterious but trustworthy agent named “Laplace’s Demon” has recently appeared, and informed everyone that, to a first approximation, the world is currently in one of seven possible quantum states. The Demon, being a perfect Bayesian reasoner with Solomonoff Priors, has determined that each of these states should be assigned probability 1⁄7. Knowledge of which state we are in will perfectly predict the future, with one important exception: It is possible for the US electorate to “Intervene” by changing whether Clinton or Bush is elected. This will then cause a ripple effect into all future events that depend on which candidate is elected President, but otherwise change nothing.
The Demon swears up and down that the choice of whether Hillary or Jeb is elected has absolutely no impact in any of the seven possible quantum states. However, because the prediction market has already been set up and there are powerful people with vested interests, it is decided to run the market anyway.
Roughly, the demon tells you that the world is in one of the following seven states:
| State | Kim overthrown | Election winner (if no intervention) | US Nuked if Hillary elected | US Nuked if Jeb elected | US Nuked |
|---|---|---|---|---|---|
| 1 | No | Hillary | Yes | Yes | Yes |
| 2 | No | Hillary | No | No | No |
| 3 | No | Jeb | Yes | Yes | Yes |
| 4 | No | Jeb | No | No | No |
| 5 | Yes | Hillary | No | No | No |
| 6 | Yes | Jeb | No | No | No |
| 7 | Yes | Jeb | No | No | No |
Let us use this table to define some probabilities. If one intervenes to make Hillary win the election, the probability of the US being nuked is 2/7 (read off the “US Nuked if Hillary elected” column). If one intervenes to make Jeb win the election, the probability of the US being nuked is also 2/7 (the “US Nuked if Jeb elected” column). In the language of causal inference, these probabilities are Pr[Nuked | do(Elect Clinton)] and Pr[Nuked | do(Elect Bush)]. The fact that these two quantities are equal confirms the Demon’s claim that the choice of President has no effect on the outcome. An agent operating under causal decision theory will use this information to correctly conclude that he has no preference about whether to elect Hillary or Jeb.
However, if one instead conditions on who actually was elected, we get different numbers: Conditional on being in a state where Hillary is elected, the probability of the US being nuked is 1/3, whereas conditional on being in a state where Jeb is elected, the probability of being nuked is 1/4. Mathematically, these probabilities are Pr[Nuked | Clinton Elected] and Pr[Nuked | Bush Elected]. An agent operating under evidential decision theory will use this information to conclude that he should vote for Bush. Because evidential decision theory is wrong, he will fail to optimize for the outcome he is interested in.
Now, let us ask ourselves which probabilities our prediction market will converge to, i.e., which probabilities participants in the market have an incentive to provide their best estimates of. We defined one contract as “Hillary elected and US nuked”. The probability of this outcome is 1/7; if we normalize by dividing by the marginal probability that Hillary is elected (3/7), we get 1/3, which is equal to Pr[Nuked | Clinton Elected]. In other words, the prediction market estimates the wrong quantities.
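For readers who want to check the arithmetic, here is a minimal sketch in Python. The state table and the 1/7 prior come straight from the Demon’s table above; the variable names are my own.

```python
from fractions import Fraction

# Each row of the Demon's table:
# (kim_overthrown, winner_if_no_intervention, nuked_if_hillary, nuked_if_jeb)
states = [
    (False, "Hillary", True,  True),   # state 1
    (False, "Hillary", False, False),  # state 2
    (False, "Jeb",     True,  True),   # state 3
    (False, "Jeb",     False, False),  # state 4
    (True,  "Hillary", False, False),  # state 5
    (True,  "Jeb",     False, False),  # state 6
    (True,  "Jeb",     False, False),  # state 7
]
p = Fraction(1, 7)  # the Demon assigns each state probability 1/7

# Interventional probabilities: force the winner and read off the "nuked if ..." column.
p_nuked_do_hillary = sum(p for (_, _, nh, _) in states if nh)  # 2/7
p_nuked_do_jeb = sum(p for (_, _, _, nj) in states if nj)      # 2/7

# Conditional probabilities: restrict attention to states where the candidate wins anyway.
p_hillary = sum(p for (_, w, _, _) in states if w == "Hillary")                    # 3/7
p_nuked_and_hillary = sum(p for (_, w, nh, _) in states if w == "Hillary" and nh)  # 1/7
p_nuked_given_hillary = p_nuked_and_hillary / p_hillary                            # 1/3

p_jeb = sum(p for (_, w, _, _) in states if w == "Jeb")                            # 4/7
p_nuked_and_jeb = sum(p for (_, w, _, nj) in states if w == "Jeb" and nj)          # 1/7
p_nuked_given_jeb = p_nuked_and_jeb / p_jeb                                        # 1/4

print(p_nuked_do_hillary, p_nuked_do_jeb)        # 2/7 2/7 -> the CDT agent is indifferent
print(p_nuked_given_hillary, p_nuked_given_jeb)  # 1/3 1/4 -> what the market price implies
```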
Essentially, what happens is structurally the same phenomenon as confounding in epidemiologic studies: There was a common cause of Hillary being elected and the US being nuked. This common cause—whether Kim Jong-Un was still Great Leader of North Korea—led to a correlation between the election of Hillary and the outcome, but that correlation is purely non-causal and not relevant to a rational decision maker.
The obvious next question is whether there exists a way to save futarchy, i.e., any way to give traders an incentive to pay a price that reflects their beliefs about Pr[Nuked | do(Elect Clinton)] instead of Pr[Nuked | Clinton Elected]. We discussed this question at the Less Wrong Meetup in Boston a couple of months ago. The only procedure we agreed will definitely solve the problem is the following:
1. The governing body makes an absolute pre-commitment that, no matter what happens, the next President will be determined solely on the basis of the prediction market.
2. The following contracts are listed: “The US is nuked if Hillary is elected” and “The US is nuked if Jeb is elected”.
3. At the pre-specified date, the markets are closed and the President is chosen based on the estimated probabilities.
4. If Hillary is chosen, the contract on Jeb cannot be settled, and all bets on it are reversed.
5. The Hillary contract expires when it is known whether Kim Jong-Un presses the button.
This procedure will get the correct results in theory, but it has the following practical problems: It allows optimizing on only one outcome metric (because one cannot precommit to choose the President based on criteria that could potentially be inconsistent with each other). Moreover, it requires the reversal of trades, which will be problematic if people who won money on the Jeb contract have already withdrawn their winnings from the exchange.
The only other option I can think of for obtaining causal information from a prediction market is to “control for confounding”. If, for instance, the only confounder is whether Kim Jong-Un is overthrown, we can control for it by using do-calculus to show that Pr[Nuked | do(Elect Clinton)] = Pr[Nuked | Clinton elected, Kim overthrown] × Pr[Kim overthrown] + Pr[Nuked | Clinton elected, Kim not overthrown] × Pr[Kim not overthrown]. All of these quantities can be estimated from separate prediction markets.
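As a sanity check, the adjustment formula can be verified numerically against the Demon’s table, continuing the sketch above (this reuses the `states` list and state probability `p` from the earlier code; again, the code and helper names are mine):

```python
# Continuing the sketch above: adjust for the confounder "Kim overthrown".
def pr(event):
    """Probability of an event, summed over the seven equally likely states."""
    return sum(p for s in states if event(s))

def pr_given(event, cond):
    """Conditional probability Pr[event | cond]."""
    return pr(lambda s: event(s) and cond(s)) / pr(cond)

hillary_elected = lambda s: s[1] == "Hillary"
kim_overthrown = lambda s: s[0]
nuked = lambda s: s[2] if s[1] == "Hillary" else s[3]  # the "US Nuked" column

adjusted = (
    pr_given(nuked, lambda s: hillary_elected(s) and kim_overthrown(s)) * pr(kim_overthrown)
    + pr_given(nuked, lambda s: hillary_elected(s) and not kim_overthrown(s))
      * pr(lambda s: not kim_overthrown(s))
)
print(adjusted)  # 2/7, matching Pr[Nuked | do(Elect Clinton)] from the table
```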
However, this is problematic for several reasons:
- There will be an exponential explosion in the number of required prediction markets, and each of them will ask participants to bet on complicated conditional probabilities that have no obvious causal interpretation.
- There may be disagreement on what the confounders are, which will lead to contested contract interpretations.
- The expert consensus on what the important confounders are may change during the lifetime of the contract, which will require the entire thing to be relisted; etc.

For practical reasons, therefore, this approach does not seem feasible.
I’d like a discussion on the following questions: Are there any other ways to list a contract that gives market participants an incentive to aggregate information on causal quantities? If not, is futarchy doomed?
(Thanks to the Less Wrong meetup in Boston and particularly Jimrandomh for clarifying my thinking on this issue)
Anders, thanks for this post, I will think about this more.
By exponential explosion here, do you mean the usual issues with marginalizing in Bayesian networks of high treewidth? If so, this paper I wrote (UAI-2011) with some folks claims it is possible to avoid this issue in “appropriately sparse” causal graphs (by also potentially exploiting Verma constraints in the same way variable elimination exploits conditional independences):
http://arxiv.org/pdf/1202.3763.pdf
If we want to get non-identifiable causal quantities from a joint distribution Futarchy gets us, then we will fail, but I am not sure this is Futarchy’s fault—it is a method of aggregating info about a joint distribution, nothing more and nothing less.
If we want to get an identifiable causal quantity, and if the above is the kind of exponential explosion you meant, then the issue is merely computational for identifiable causal quantities (and also remains for non-causal quantities if the graph has a large enough treewidth / there aren’t enough independences to exploit). Futarchy isn’t magic, it can’t solve NP-complete problems for free.
If the graph is sparse, then either we equate futarchy-estimated probabilities with causal ones by design (by e.g. asking about randomized trials), or just treat Futarchy as a module that does “statistical inference” (filling in probabilities in a table by aggregating previously unaggregated info) for us, and then pay a polynomial cost to get the functionals we really want. Either way, we are not doomed (???).
Thank you for the great comments Ilya!
I hope I didn’t abuse terminology when I used the term “exponential explosion”. I should have double checked the use of this term with someone more technical than me. What I meant was essentially the “curse of dimensionality” - the number of required prediction markets grows exponentially with the number of confounders.
I see this as a real problem because almost all those conditional prediction markets would have to be reversed with all the trades unwound. This makes it very hard to argue that the markets are worth the time of the participants.
Asking about randomized trials is an interesting idea, but those markets will be hard to settle without actually running the randomized trial. Also, when using futarchy to make political decisions, the questions we want answers to are often one-off events at the aggregate country level, which makes it very hard to run a trial.
Anders, what I meant is your Kim/Hillary example has a graph that looks like this:
A → Y ← C → A, and you want p(Y | do(a)). Your point is that your problem grows exponentially with the statespace of C.
Imagine instead that you had a much more complicated example, with the graph in Fig. 7 of the paper I linked (where a bunch of confounders are not observed and are arbitrarily complicated). In fact, instead of x1, …, x5, imagine it was a graph of length k: x1, …, xk, and you want p(xk | do(xk−2)). Then my claim is that the problem is not exponential in size/time in the statespace of those confounders, OR in k, but is in fact of constant size/time.
Although I am not entirely sure how to ask a prediction market for the right parameters directly… this is probably an open problem.
I believe this has already been addressed by Robin Hanson. See http://www.overcomingbias.com/2008/01/presidential-de.html and http://www.overcomingbias.com/2011/11/conditional-close-election-markets.html
Don’t take this the wrong way, but I’m actually slightly shocked that no one at the meetup knew about this, and am revising my estimate of the average quality of the meetups downwards (but not by too much). I know almost nothing about futarchy, have read maybe 5-10 posts on it ever, and remembered this.
(If I’ve misunderstood your point, do tell me.)
This is a more general problem than Robin’s example (but I don’t think it’s a serious issue, even then, see below).
Thanks for those links. I hope I didn’t misrepresent the meetup; it is certainly possible that some of them had read this. I have a vague memory of having seen these posts before, and I definitely should have discussed them in my post. It is good to know Robin has thought about these issues. His approach seems like it is inspired by regression discontinuity. I will think about whether this works and how general the approach is.
Is there a summary of the timeline in this example? In particular, when do we know if Kim is overthrown? In order for it to be the confounder you describe, we must know before the election; but then simply conditioning on the election result gives the same chance of being nuked in either case (1/2 if Kim is still in power and 0 otherwise).
Maybe I’m not following, but the example is not intuitive to me, and seems contrived.
Defining confounders in general is tricky, I won’t attempt to do that. However, I will provide an example of the most relevant timeline where this problem occurs:
You are trading the prediction market at time point t. You expect that at some future timepoint k>t, some event (the confounder) will occur which will affect your estimate of both the probability of the decision at time l>k and the outcome at time m>l. In such a situation, you will bet on the conditional prediction markets based on your beliefs about conditional probabilities, not based on your beliefs about causal quantities.
I provided a simple example where Kim being in power was the only confounder. It was designed such that simply conditioning on Kim being in power would give the correct answer. This was just to illustrate the problem; in real life, it will be more complicated because not everyone will agree on what the confounders are and what needs to be conditioned on.
Why are we told that Kim has been saying how much he hates Hillary if the probability of the US being nuked is the same whether Hillary or Jeb is elected (conditional on Kim being in power)? And why would the probability of Hillary being elected go up if Kim is still in power, in this situation? Even if the actual probability of Kim nuking is the same whether Hillary or Jeb is in office, his statements should lead us to believe otherwise. (I realize the latter has no effect on the analysis: if we switch the probabilities of Jeb and Hillary being elected in the ‘overthrow’ case, the conclusion remains essentially the same. But I feel it bears pointing out.)
Perhaps the reason this example is so counterintuitive is that the quantitative probabilities given do not match the qualitative set-up. The example just seems rather contrived, though I’m having trouble putting my finger on why (other than the above apparent contradiction). For now, the best I can come up with is “the demon has done what the prediction market is supposed to do (that is, evaluate the probabilities of Kim being overthrown or not, and of each candidate winning in each case) and everyone is ignoring it for no particular reason, except that some outside observers are using that information to evaluate [the market’s choice in absence of that information].” But perhaps I’m misunderstanding something?
Sorry if this post seems somewhat scattered; I just sort of wrote what I thought of as I thought of it.
The reason I told you that he hates Hillary is that I wanted to establish Kim as a common cause of the election results and of whether the US is nuked. I agree that this should have been stated more clearly. I also agree that the probability of Hillary being elected should probably go up if Kim is still in power (though you can make an argument that being hated by Kim Jong-Un is a net benefit to a US politician). I will update the post tomorrow to make this clearer.
In the absence of other information, the setup would lead us to believe that Hillary being elected would increase the probability of a nuclear attack. However, one of the reasons I introduced Laplace’s demon was to make it clear that this is not the case, and to make it clear that all market participants are in agreement on this issue.
If you have a proposed estimator of a causal quantity and you are wondering if it is valid, one simple sanity check is to assume the causal null hypothesis and find out whether the estimator will be zero. If it is not necessarily zero, the estimator is biased. That is essentially what I tried to do with this example.
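To make the check concrete with the numbers above: the causal null holds by construction (both interventional probabilities are 2/7), yet the naive contrast Pr[Nuked | Clinton Elected] − Pr[Nuked | Bush Elected] = 1/3 − 1/4 = 1/12, which is not zero. So an estimator based on the observational conditionals is biased.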
In the case of my example, the market participants all genuinely believe that there is no causal effect of the election results. However, they are not ignoring it: The contracts are just written such that participants are not asked to bet on the causal effect of the election results, but on conditional probabilities.
“(though you can make an argument that being hated by Kim Jong-un is a net benefit to a US politician). ”
Yeah, sure, that works too, though then the ‘threat of being nuked’ seems like a red herring.
“In the case of my example, the market participants all genuinely believe that there is no causal effect of the election results. However, they are not ignoring it: The contracts are just written such that participants are not asked to bet on the causal effect of the election results, but on conditional probabilities.”
That makes a lot more sense than the original post, IMO. I’m still trying to process it entirely though, and figure out how useful such an example is. Thanks for your responses.
A short summary at the top would be useful.
I’m a confused layman.
I’m not familiar with the difference between Do(Elect Clinton) and (Clinton Elected). Looking at the numbers, it seems that the first one is “Clinton is elected, either because that was going to happen anyway or because of an intervention”, and the second one is “there was no intervention, and Clinton was elected”? (Or if there was an intervention, it was an unnecessary pro-Clinton one.)
It seems like if we’re talking about who was actually elected, then we should take into account an intervention, if there was one, and these numbers should both be 2/7?
Isn’t this only if there’s no intervention?
I understood it like this: there is a group of people who will commit to changing the election somehow iff the prediction market says it would be good. If the market is neutral, then whoever gets more votes will win.
I found this example very confusing, but perhaps it was confusing in a good way because I had to spend quite a while to grasp exactly what the argument was.
Unless I am still misunderstanding your argument, the second market proposal you make is exactly what Hanson’s solution to this is, and it’s mentioned on the page for futarchy here:
I think your original conjunctive market is straightforwardly a bad idea, as you were able to determine. So now we are left with your practical problems.
It allows maximizing on only one outcome metric, but that outcome metric can in theory be as complex as you want. If you could encode your entire utility function into the outcome metric, then that’s not a big deal. More concretely, for likely uses of futarchy any outcome measure would be better than the status quo, because right now, for example, project funding is determined by the whims of politicians to curry favor with an uneducated electorate. How many stadiums would be built in cities if they had to pass the crucible of actually having economic actors believe in the supposed economic gains the stadium would bring to the city? (Being nuked is likely to affect many summary measures that could plausibly be used to elect presidents, like GDP + some measure of social well-being.)
Reversal of trades is not problematic, at least not for the reason you think it is. Sports betting establishments call off trades all the time, such as when a game is cancelled due to inclement weather. People who are able to withdraw money have already made their contribution to the market, and when they sold their contracts someone else bought them and still has them open. The only reason this would seem bad is because of some sense of cosmic justice, where one doesn’t like people who are wrong to make money or something. There are more realistic concerns circling around this one regarding liquidity, opportunity costs for holding trades, and margin accounts, but they are largely open issues until we have more prediction markets.
This looks like a restatement of the original market I posted? The choice of president is the “proposed policy” and the nuclear attack is the measure of “national welfare”. The fact that he uses causal language (“increase”) in this setting is unfortunate. My first proposal did not have called-off bets, but called-off bets are not by themselves sufficient to solve the problem.
I will have to think more about whether trade reversal is necessary; it is possible that you are right and that it is not.
For the moment, I will just note that I did not suggest trade reversal for reasons of fairness; the reason is that this reversal procedure is necessary in order to incentivize participants to trade on their beliefs about causal quantities. If traders believe that the prediction market on Jeb will be reversed if Hillary is chosen, then the prices they are willing to pay will only reflect their beliefs about what happens if Jeb is chosen (assuming that the only cause of who is elected is the prediction market, which is why I insisted on precommitment). The alternative to reversing the trades is to expire the contract at the last traded value, but if traders expect this, the price they are willing to pay may become entangled with their beliefs about who will be chosen (which may be affected by confounders).
That was the logic behind it, but please give me a couple of days to think about whether it is correct
His conditional bets include called-off bets, what you call reversal of trades, which is why I thought what he is talking about corresponds to your second market proposal. Your first market proposal doesn’t have called-off bets. Is there some part of your statistical objection which isn’t solved by called-off bets?
On further thought, I think the reversal of trades going back to all the traders in the market may be necessary, while I was only thinking that a reversal of outstanding contracts would be necessary. That could lead to losses to the exchange if, for example, the only remaining holders had acquired their contracts at expensive times, so the $100 that exists to settle both contracts is insufficient to reverse both trades. As alternatives, the exchange could (1) disallow withdrawals until the contract is fully settled, (2) subsidize those losses through trading fees, or (3) only pay out reversals pro rata to the extent funds are available. The first wouldn’t distort the market but may affect liquidity. In general I’m skeptical of proposals that have extremely long time frames, but Hanson seems to want contracts that are decades long in some places, as I recall. The second would distort the market a little bit because of the fees, but it’s likely to be by a manageable amount. The third would probably distort the market the most, especially as it got to the decision time, and liquidity might dry up if there was a perception that the current holders had all gotten in at high prices. I think in general Hanson sees prediction markets as being subsidized markets, which 2 is most in-line with, perhaps without even the trading fees depending on the magnitude of the subsidy.
I see where you are coming from, in fact this used to be my position: This post was inspired by a talk I gave at the Less Wrong Meetup, where I made the claim that the causality problem is solved by called-off bets (reversals). Jimrandomh called me on it, and he was right: There will still be confounding even when bets are called off/reversed:
Imagine there are two possible worlds: In one of them, Kim is overthrown, in the other he is not. I expect the probability of Hillary being elected will be much higher if he is overthrown. I also expect that the probability of an attack is much higher if Kim is still in office.
If I make a bet on the probability of an attack given that Hillary is elected, and I know that this market will only be settled in the case that she is elected, my estimated probability will incorporate information about the fact that if the market is settled, more likely than not, Kim was overthrown. However, at the time I make the bet, I don’t know whether Kim will be overthrown, so this means my bet incorporates information that causally does not depend on Hillary being elected.
We tried to solve this using the “precommitment” mechanism. In graphical terms, the idea is that this removes all arrows into the election by ensuring that the only cause of who gets to be President is the prediction market itself. My intuition is that it works, but it is certainly something that should be double-checked by someone with more technical expertise on prediction markets and causality.
Deciding the outcome based on the price in the betting market is the whole point of futarchy. You seem to be saying that prediction markets in absence of futarchy don’t provide good advice on how you should vote. That is an interesting point which I hadn’t considered before your post.
I am still uncomfortable with your example, however. If Kim is overthrown prior to the election, the market rates will adjust based on that information. If he’s overthrown after the election, then there is no causal link between that and the election results, presumably. Prior to his overthrow, the market simply provides the best estimate of the outcome until that result is known. All that means is you shouldn’t make decisions based on outdated estimates that didn’t include all known information.
Yes, the point of futarchy is to make the decision based on the price in the prediction market. What I am saying is that if you want participants to provide their best guesses about which decisions will maximize the outcome, you have to make a credible pre-commitment that the only factor that influences the decision is the prediction market that is currently being traded. You can only make such a commitment for one prediction market per decision (but like you say, the outcome measure can be arbitrarily complex)
I think you are right that once it becomes known whether Kim is overthrown, it is no longer a confounder. Therefore, the bias should be expected to get lower the nearer we get to the decision time point. However, some confounders may be unobservable, or unobserved until the time the decision is made. For instance, if this is not a pure futarchy and there is voting going on, you may gain information from the make-up of the electorate. Imagine there is a referendum on a 70% income tax and Bernie Sanders is running for President. Even if he has no influence on whether the referendum passes, his chances of being elected will be correlated with the outcome of the referendum, and you won’t know which state you are in until you see the exit polls.
What is the word “quantum” doing there? Repeat after me: Quantum superpositions are not about epistemic uncertainty! Quantum superpositions are not about epistemic uncertainty! Quantum superpositions are not about epistemic uncertainty!
An issue with that is that, all other things being equal, $100 will be worth more if the US is not nuked than if it is.
The issue with this example (and many similar ones) is that to decide between interventions on a variable X from the outside, EDT needs an additional node representing that outside intervention, whereas Pearl-CDT can simply do(X) without the need for an additional variable. If you do add these variables, then conditioning on that variable is the same as intervening on the thing that the variable intervenes on. (Cf. section 3.2.2 “Interventions as variables” in Pearl’s Causality.)
Futarchy can’t distinguish between ‘values’ and ‘beliefs’.
It takes domain knowledge and discovery research to realise which values can actually be reduced to beliefs.
For instance, someone might value ‘healthcare’, thinking that the associated beliefs are ‘activity-costing’ of health budgets on the departmental secretary’s recommendations vs. throwing it all into bednets (for an absurd but illustrative example).
In actual fact, the underlying value may not be healthcare, depending on whether the person believes healthcare maximises some confounded higher-order value, i.e. health.
However, it’s also strategic in an international context. Depending on what someone believes, they may or may not be trying to maximise for strategy!
I’m learning more here.
First of all, I think it would be a good idea to avoid use of the word “confounding” unless you use it with its technical definition, i.e., to discuss whether Pr(X|Y) = Pr(X | do(Y)), or informally to describe the smoking lesion problem or Simpson’s paradox. I don’t think that is what you are referring to in this case.
I think what you’re getting at is an example of Goodhart’s law. See for instance http://lesswrong.com/lw/1ws/the_importance_of_goodharts_law/
Certainly, if you use prediction markets with contracts on G* instead of G, people will bet based on their true beliefs about G* instead of their true beliefs about G. In this case, futarchy will end up optimizing for G* instead of G (assuming you can find a solution to the confounding problem). I don’t disagree with this criticism of futarchy, but I’m not sure I see the relevance to my post.
Why are you doing this normalization? It doesn’t seem related to the 4 contracts on your prediction market in an obvious way.
I’m confused as to how Kim Jong-Un being leader of NK “causes” Hillary to be elected. That seems to go against state 5 in your table.
The normalization is because we want to compare what happens conditional on Hillary being elected to what happens conditional on Jeb being elected. These probabilities will not be comparable unless we normalize.
In the table, Kim being in power has a probabilistic causal effect (or is a marker for something else that has a causal effect) such that the probability of Hillary being elected is 1⁄2 when he is in power and 1⁄3 when he is not in power. I am using the word “cause” in the broad sense that also includes preventative effects.
(This conversation will be confusing after I finish my plans to edit the article as promised yesterday. Apologies to future readers)
Why would we want to do this? Your contracts aren’t structured in such a way that they encourage these sorts of conditional considerations. P(A|B) isn’t on the market. P(A and B) is. Maybe you meant for your contracts to be “If Hillary is elected, the U.S. will be nuked”?
You are right that P(A|B) isn’t on the market and that P(A and B) is. However, it is easy to calculate P(A|B) as P(A and B)/P(B). The problem is that P(A|B) does not help inform you about the correct decision.
You are also right that what we want is something like “If Hillary is elected, the U.S. will be nuked”. However, the problem is that this natural language sentence is ambiguous: It can be interpreted as P(A|B), in which case it will lead to incorrect decisions. Alternatively, it can be interpreted as P[A| Do(B)], which is the information we need, but then it will be very challenging to write the rules of a prediction market such that the expiry conditions incentivize participants to bet their true beliefs.
I understand that we’re capable of calculating P(A|B), but if P(A|B) isn’t on the market, then the market won’t reflect the value of P(A|B). So I don’t understand your statement that the market will somehow get the answer wrong because of its estimate of P(A|B). The market makes no value estimate of that quantity.
Your market, as stated, is really strange in a lot of ways. By having the contracts include “Bush wins” or “Clinton wins” the market is essentially predicting itself. It’s going to have really strong attractors for a landslide victory. It seems like that isn’t what you intend, but it’s going to be the consequence of your current set up. Judging by the number of other people who have also replied that they are confused, you may want to rework this example.
You could have a market that estimates P(A|B) directly using the reversal mechanism (called-off bets). However, I maintain that this will give identical estimates to the markets I proposed. These things are probabilities; they follow the rules of probability logic.
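(A quick back-of-the-envelope way to see why the estimates coincide, using A and B as above: a called-off contract pays $100 if B and A both occur, $0 if B occurs without A, and refunds the purchase price p if B does not occur. A risk-neutral trader is indifferent when p = 100·P(A and B) + p·P(not B), which rearranges to p = 100·P(A and B)/P(B) = 100·P(A|B). In the example, that is 100 × (1/7)/(3/7) ≈ $33.3, the same 1/3 that the conjunctive market implies after normalization.)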
The point I was trying to illustrate was not that it is impossible to estimate P(A|B), but rather that P(A|B) is not the quantity that a rational decision maker needs in order to optimize for A.
I agree that using a market that directly estimates P(A|B) might have been a better example, because it avoids readers going in the wrong direction when they try to figure out what is going on. However, changing this will take some non-trivial rewriting of the text. I will try to do that when I have more time on my hands.
Your point about the markets predicting themselves is interesting. I was imagining a democracy informed by prediction markets rather than a pure futarchy. However, if the voters are influenced by the market, it does indeed predict itself to some extent. I don’t think this is a major problem, but I will keep thinking about it. I have relatively high confidence that my argument for prediction markets being confounded does not rely on this.
Given how you have set this problem up, what do you think will be the relative prices of the 4 contracts you specified?
In the scenario I provided, the contracts will be traded at the following prices after the demon reveals his information:
- Hillary elected and US nuked: $14.3 (1/7 of $100)
- Hillary elected and US not nuked: $28.6 (2/7)
- Jeb elected and US nuked: $14.3 (1/7)
- Jeb elected and US not nuked: $42.9 (3/7)
Like you said, if people change their votes based on the market, the prices may be distorted by the market predicting itself.
This is very interesting and looks quite damning for futarchy. However, I’d like to see a less hypothetical example. Do you have a more concrete example where this could occur?
Also, have you discussed this with Robin Hanson?
Yeah, Laplace’s demon makes it read like ‘Futarchy is screwed in the presence of quasi-omniscient beings out to screw with us’, which doesn’t amount to much as an objection.
However, I suspect you could construct a less supernatural scenario.
People make this objection to CDT w/ Newcomb + Omega (who is quasi-omniscient and wants to mess with us), and it apparently matters a lot to some people.
But I don’t think that’s what this is about.
I see how this would be confusing, and it is definitely possible to construct a more realistic scenario.
The use of Laplace’s demon was an attempt to show that in my scenario, all the information (both causal and conditional) is available to all participants in the market. This allows me to show where all the probabilities come from; readers can just look it up in a table which shows the truth.
It was also intended to illustrate that the problem has nothing to do with information asymmetry or inefficient markets (because the true value of the contract is known to all participants).
Isn’t this more an argument against causal decision theory?
You are confusing the means and the ends. If you have a hammer, you do not necessarily need to put a nail through a board. You might need to extract a screw. Having a hammer is not an argument against the need to extract screws sometimes.
Ok, but then I’d like to know when you need to use causal probabilities for something like this (an honest question).
(This may sound circular): when you are interested in causal effects that are confounded in the data you see. I am not sure exactly what you are asking. The right sequence of events is you first decide what you are interested in, and then find a method to get it, not vice versa.
CDT doesn’t always get the correct answers, but in this case, (the claim is that) CDT does and the prediction market doesn’t.
Can you expand on that? I don’t see how that follows.