It’s not a question of whether it’s “good news”, but whether it’s a plausible chance of occurring (or rather, whether it’s a “big-enough-feeling” probability).
The quote seems to me to be about how, if you predict something is only about 35% likely, and that thing happens, that’s not sufficient evidence to conclude you predicted wrong or to throw out your methodology. The line about the million dollars looks like an example to back up the prior sentence, “that’s a pretty good chance”. Other than that example, the quote doesn’t seem to be about excitement at all, really.
Okay, that may be the intent of the argument. Not sure I agree with that, either, though.
Silver’s model is presumably built from several factors. If in the end it gives a prediction that doesn’t come true, then there are likely factors that were weighted incorrectly or left out. The “70% odds” is basically saying “I’m 30% confident that the outcome of the model is wrong.”
If Obama ends up losing, that doesn’t mean Silver knows nothing, but it is evidence that the model was flawed in some meaningful way, as he now suspects.
That is, we should update at least slightly toward ‘Silver’s model was off’ and away from ‘Silver’s model was accurate’, despite the fact that he had less than 100% confidence in it.
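To make that updating argument concrete, here is a minimal Bayes-rule sketch in Python. The 50/50 prior and the 0.60 chance a badly miscalibrated model would have implied are assumptions invented purely for illustration; only the 0.30 comes from the stated odds.

```python
# Toy Bayes-rule update: how much should a Romney win shift us toward
# "the model was flawed"?  The prior and the "flawed" likelihood are invented.
prior_flawed = 0.5                 # assumed prior credence that the model is badly off
prior_good = 1 - prior_flawed

p_romney_if_good = 0.30            # the model's own stated chance of a Romney win
p_romney_if_flawed = 0.60          # assumed chance under a seriously miscalibrated model

# Observe a Romney victory and apply Bayes' rule.
p_romney = prior_good * p_romney_if_good + prior_flawed * p_romney_if_flawed
posterior_flawed = prior_flawed * p_romney_if_flawed / p_romney

print(f"P(model flawed | Romney wins) = {posterior_flawed:.2f}")  # ~0.67 with these inputs
```

So a single miss does move us toward ‘the model was off’, but nowhere near certainty.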
Remember that Silver is running a Monte Carlo-type model. In his case, what his ‘odds’ mean is that when he runs the simulation N times, Obama wins in 70% or so of the runs and Romney wins in 30% or so. So it’s not “I’m 30% confident the outcome of the model is wrong”; it’s “30% of the time, the model outputs a Romney victory.”
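For concreteness, here is a minimal sketch of that kind of simulation. The per-state probabilities, the ‘safe’ electoral-vote totals, and the assumption that states are independent are all invented simplifications for illustration; they are not Silver’s actual inputs or method (his model, among other things, correlates state outcomes).

```python
import random

# Invented inputs: P(Obama carries the state) and that state's electoral votes.
SWING = {"Ohio": (0.70, 18), "Florida": (0.55, 29), "Virginia": (0.60, 13)}
SAFE_OBAMA_EV, SAFE_ROMNEY_EV = 247, 231   # made-up "safe" totals (538 EVs in all)

def obama_wins_once() -> bool:
    """Simulate one election by flipping each swing state independently."""
    ev = SAFE_OBAMA_EV + sum(votes for p, votes in SWING.values() if random.random() < p)
    return ev >= 270

N = 100_000
wins = sum(obama_wins_once() for _ in range(N))
print(f"Obama wins in {wins / N:.0%} of simulated elections")  # ~74% with these inputs
```

The reported ‘odds’ are just the fraction of simulated elections each candidate wins.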
Okay (though to me that sounds like he has many related models that differ based on certain variables he isn’t certain about… maybe that is being pointlessly pedantic), but would you agree that a Romney victory would be evidence that the model needs adjustment, stronger evidence for that than for the model being reliable as is? If not, what if the odds were 99 to 1 instead of 60 to 40?
Just trying to clarify my own thinking here.
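One way to make the ‘99 to 1’ question precise is to compare likelihood ratios: how strongly a surprise counts against the model depends on how confident the model was. The ‘race was really a coin flip’ rival hypothesis below is invented purely for illustration.

```python
# Compare how strongly a Romney win favors a rival "the race was a coin flip"
# hypothesis over the model, for two different levels of stated confidence.
P_ROMNEY_UNDER_RIVAL = 0.5                 # invented rival hypothesis

for p_romney_under_model in (0.30, 0.01):  # the stated 30%, vs. a hypothetical 99-to-1 call
    ratio = P_ROMNEY_UNDER_RIVAL / p_romney_under_model
    print(f"model said P(Romney) = {p_romney_under_model:.2f}: "
          f"a Romney win favors the rival by a factor of {ratio:.1f}")
# 0.30 -> factor 1.7 (weak evidence); 0.01 -> factor 50.0 (strong evidence)
```

So a miss on a 99-to-1 call is far stronger evidence of a problem than a miss on a 70-30 call, but neither is proof.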
I don’t know much about the internals of his model, but I would say ‘it depends.’ I’m sure you can use his model to make predictions of the form ‘given a Romney victory, what should the final electoral map look like?’, etc., but I’m not sure if the public has that kind of access. Certainly questions like that can be used to probe the model after a Romney or Obama win. If either candidate wins ‘in the wrong way’ (i.e., carries the wrong states), it’s obviously stronger evidence the model is wrong than we could get from just Romney winning.
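In principle a simulation-based model answers that conditional question directly: keep only the runs in which Romney wins and look at what those runs have in common. A sketch, reusing the same invented per-state numbers as above rather than anything from the real model:

```python
import random

SWING = {"Ohio": (0.70, 18), "Florida": (0.55, 29), "Virginia": (0.60, 13)}
SAFE_OBAMA_EV = 247  # invented "safe" total, as above

def simulate_once():
    """Return (which swing states Obama carries, whether Obama wins overall)."""
    carried = {s: random.random() < p for s, (p, _) in SWING.items()}
    ev = SAFE_OBAMA_EV + sum(votes for s, (_, votes) in SWING.items() if carried[s])
    return carried, ev >= 270

runs = [simulate_once() for _ in range(100_000)]
romney_runs = [carried for carried, obama_won in runs if not obama_won]

# Conditional "map": how often Obama still carries each swing state in the
# simulated elections that Romney ends up winning.
for state in SWING:
    share = sum(run[state] for run in romney_runs) / len(romney_runs)
    print(f"given a Romney win, Obama carries {state} in {share:.0%} of runs")
```

An actual result that looks nothing like the Romney-win runs (the ‘wrong way’ to win) would then be much stronger evidence against the model than the bare fact of a Romney win.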
‘given a Romney victory, what should the final electoral map look like?’
He has sometimes selected such maps in blog posts, and generally has one with all of the most likely outcomes, but these maps are not always-and-automatically publicly accessible, so far as I know.
It’s not a question of whether it’s “good news”, but whether it’s a plausible chance of occurring (or rather, whether it’s a “big-enough-feeling” probability).
From the quote, it sounds like it’s a question of whether odds lower than one’s prior should increase or decrease excitement when the stakes are high.
Sure, a surprise result would be some evidence that the model was off. But it’s not a huge amount of evidence; Silver predicts that in 10 such elections there will be 3 such “surprises”.