Is there a summary of the timeline in this example? In particular, when do we know whether Kim is overthrown? In order for it to be the confounder you describe, we must know before the election; but then simply conditioning on the election result gives the same chance of being nuked in either case (1/2 if Kim is still in power and 0 otherwise).
Maybe I’m not following, but the example is not intuitive to me, and seems contrived.
Defining confounders in general is tricky, so I won’t attempt to do that here. However, I will provide an example of the most relevant timeline on which this problem occurs:
You are trading on the prediction market at time t. You expect that at some future time k > t, some event (the confounder) will occur that will affect your estimate of both the probability of the decision at time l > k and of the outcome at time m > l. In such a situation, you will bet on the conditional prediction markets based on your beliefs about conditional probabilities, not based on your beliefs about causal quantities.
I provided a simple example where Kim being in power was the only confounder. It was designed such that simply conditioning on Kim being in power would give the correct answer. This was just to illustrate the problem; in real life, it will be more complicated because not everyone will agree on what the confounders are and what needs to be conditioned on.
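Here is a minimal numerical sketch of that structure. The only figure taken from the example is the 1/2-vs-0 chance of a nuke depending on whether Kim is in power; the overthrow and election probabilities below are made-up assumptions purely for illustration.

```python
# Hedged sketch: all numbers are illustrative assumptions except the
# 1/2-vs-0 nuke probabilities, which come from the example in the post.

# Confounder C (realized at time k): does Kim stay in power?
p_kim = {"stays": 0.5, "overthrown": 0.5}           # assumed

# Decision D (time l): election result, whose distribution depends on C.
p_win = {
    "stays":      {"Hillary": 0.6, "Jeb": 0.4},     # assumed
    "overthrown": {"Hillary": 0.4, "Jeb": 0.6},     # assumed
}

# Outcome Y (time m): P(nuke | C, D). Causal null: within each value of C,
# the probability is identical for both candidates.
p_nuke = {
    "stays":      {"Hillary": 0.5, "Jeb": 0.5},
    "overthrown": {"Hillary": 0.0, "Jeb": 0.0},
}

def conditional(candidate):
    """P(nuke | candidate wins): what the conditional market prices."""
    joint = sum(p_kim[c] * p_win[c][candidate] * p_nuke[c][candidate] for c in p_kim)
    marginal = sum(p_kim[c] * p_win[c][candidate] for c in p_kim)
    return joint / marginal

def adjusted(candidate):
    """P(nuke | candidate wins), standardized over the confounder C."""
    return sum(p_kim[c] * p_nuke[c][candidate] for c in p_kim)

for cand in ("Hillary", "Jeb"):
    print(cand, round(conditional(cand), 3), round(adjusted(cand), 3))
# The conditional markets disagree (about 0.30 for Hillary vs 0.20 for Jeb)
# even though the confounder-adjusted probability is 0.25 for both, i.e.
# the causal effect of the election result is exactly zero.
```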
Why are we told that Kim has been saying how much he hates Hillary, if the probability of the US being nuked is the same whether Hillary or Jeb is elected (conditional on Kim being in power)? And why would the probability of Hillary being elected go up if Kim is still in power in this situation? Even if the actual probability of Kim nuking is the same whether Hillary or Jeb is in office, his statements should lead us to believe otherwise. (I realize the latter has no effect on the analysis: if we switch the probabilities of Jeb and Hillary being elected in the ‘overthrow’ case, the conclusion remains essentially the same. But I feel it bears pointing out.)
Perhaps the reason this example is so counterintuitive is that the quantitative probabilities given do not match the qualitative set-up. The example just seems rather contrived, though I’m having trouble putting my finger on why (other than the above apparent contradiction). For now, the best I can come up with is “the demon has done what the prediction market is supposed to do (that is, evaluate the probabilities of Kim being overthrown or not, and of each candidate winning in each case), and everyone is ignoring it for no particular reason, except that some outside observers are using that information to evaluate [the market’s choice in the absence of that information].” But perhaps I’m misunderstanding something?
Sorry if this post seems somewhat scattered; I just sort of wrote what I thought of as I thought of it.
The reason I told you that he hates Hillary is that I wanted to establish Kim as a common cause of the election results and of whether the US is nuked. I agree that this should have been stated more clearly. I also agree that the probability of Hillary being elected should probably go up if Kim is still in power (though you can make an argument that being hated by Kim Jong-un is a net benefit to a US politician). I will update the post tomorrow to make this clearer.
In the absence of other information, the setup would lead us to believe that Hillary being elected would increase the probability of a nuclear attack. However, one of the reasons I introduced Laplace’s demon was to make it clear that this is not the case, and that all market participants agree on this point.
If you have a proposed estimator of a causal quantity and you are wondering whether it is valid, one simple sanity check is to assume the causal null hypothesis and ask whether the estimator is then guaranteed to be zero. If it is not necessarily zero, the estimator is biased. That is essentially what I tried to do with this example.
In the case of my example, the market participants all genuinely believe that there is no causal effect of the election results. However, they are not ignoring it: The contracts are just written such that participants are not asked to bet on the causal effect of the election results, but on conditional probabilities.
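To make that last point concrete, here is a rough sketch of why such a contract ends up priced at the conditional probability. It assumes the usual conditional-market terms (the bet is voided and stakes are returned if the condition fails) and a risk-neutral trader; the numbers are the illustrative ones from the sketch above, not figures from the post.

```python
# Hedged sketch: a contract that pays 1 if the US is nuked AND Hillary wins,
# and is voided (stake returned) if Hillary loses. Numbers are assumptions.

def expected_profit(price, p_condition, p_outcome_given_condition):
    """Expected profit from buying one contract at `price`."""
    win  = p_condition * p_outcome_given_condition * (1 - price)
    lose = p_condition * (1 - p_outcome_given_condition) * (0 - price)
    return win + lose  # = p_condition * (p_outcome_given_condition - price)

p_hillary = 0.5             # assumed P(Hillary wins)
p_nuke_given_hillary = 0.3  # assumed P(nuke | Hillary wins)

# Break-even exactly when the price equals the conditional probability:
print(expected_profit(0.30, p_hillary, p_nuke_given_hillary))  # about 0
# If the market instead quoted the causal quantity (0.25 in the sketch above),
# buying would have positive expected value, so traders would bid it back up:
print(expected_profit(0.25, p_hillary, p_nuke_given_hillary))  # about 0.025
```

So a trader maximizes expected profit by pushing the price toward P(nuke | Hillary wins), regardless of what they believe about the causal effect of the election.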
“(though you can make an argument that being hated by Kim Jong-un is a net benefit to a US politician). ”
Yeah, sure, that works too, though then the ‘threat of being nuked’ seems like a red herring.
“In the case of my example, the market participants all genuinely believe that there is no causal effect of the election results. However, they are not ignoring it: The contracts are just written such that participants are not asked to bet on the causal effect of the election results, but on conditional probabilities.”
That makes a lot more sense than the original post, IMO. I’m still trying to process it entirely though, and figure out how useful such an example is. Thanks for your responses.