I was just scrolling through Metaculus and its predictions for the US elections. I noticed that pretty much every question was a conditional: if Trump wins / if he doesn't win. I had two thoughts about the estimates for these. All of them seem to suggest the outcomes are worse under Trump. But that assessment of the outcome being worse is certainly subject to my own biases, values and preferences. (For example, for US voters is it really a bad outcome if the probability of China attacking Taiwan increases under Trump? I think so, but others may well see the costs of reducing that likelihood as high for something that doesn't directly involve the USA.)
So my first thought was: how much bias should I infer is present in these probability estimates? I'm not sure. But that relates a bit to my other thought.
In one sense you could naively reason that if the probability of an outcome is p under one candidate, it must be 1 − p under the other, since only two candidates actually exist. But I think it is also clear that the two probability distributions don't come from the same pool: each is conditioned on a different world, so conceivably you could swap in the name Harris and get the exact same estimates.
So I was thinking: what if Metaculus did run the two cases side by side? Would seeing p(Harris) + p(Trump) significantly different from 1 suggest one should have lower confidence in the estimates? I am not sure about that.
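Writing it out helps me think about this (my own notation, not Metaculus's: X is some outcome, T and H the two election results):

$$P(X) = P(X \mid T)\,P(T) + P(X \mid H)\,P(H), \qquad P(T) + P(H) = 1.$$

The only pair that is constrained to sum to 1 is P(T) and P(H). The two conditionals P(X | T) and P(X | H) are estimates made in two different hypothetical worlds, so each can sit anywhere in [0, 1] and their sum can legitimately be anything from 0 to 2.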
What if we see something like p(H) approximately equal to p(T)? Does that suggest the selected outcome is poorly chosen, since it is largely independent of which candidate is elected, making the estimates largely meaningless in terms of election impact? I have a stronger sense that this is the case.
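A toy calculation (numbers invented purely for illustration): suppose forecasters say

$$P(X \mid T) = 0.30, \qquad P(X \mid H) = 0.28.$$

Then for any value of P(T),

$$P(X) = 0.30\,P(T) + 0.28\,(1 - P(T)) \in [0.28, 0.30],$$

so the election result moves the probability of X by at most two percentage points. Each conditional estimate could be perfectly calibrated and still tell us almost nothing about the election's impact on X.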
So my bottom line for now is that I should probably not hold high confidence that the estimates on these outcomes are really meaningful with regard to election impacts.