Good responses. I do think a lot of the value is the back-and-forth, and seeing which logic holds up and which doesn’t. Bunch of things to talk about.
First, the discussion of models vs. instincts. I agree that one should sometimes make predictions without an explicit model. I’m not sure whether one can ever be said to not have an implicit model while still doing the scribe thing instead of the actor thing; my model is that when someone like me makes a prediction on instinct there’s an implicit (unconscious) model somewhere, even if it’s bad and would be modified heavily or rejected outright on reflection by System 2.
I do think ‘internal consistency at a given time’ is a valid check on instinctive predictions, perhaps even the best go-to. It’s a way to turn instincts into a rough model, to check whether your instincts make any sense, and to find out what your instincts actually are. It also checks for a bunch of bias issues (e.g. the feminist-bank-teller conjunction problem often becomes obvious even when it was subtle).
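That consistency check can even be made mechanical. A minimal sketch in Python, with hypothetical numbers and a helper name of my own choosing:

```python
# A minimal consistency check on instinctive probability estimates.
# All numbers here are hypothetical illustrations.

def check_conjunction(p_a: float, p_a_and_b: float) -> bool:
    """A conjunction can never be more likely than one of its conjuncts:
    P(A and B) <= P(A). Returns True if the two estimates are coherent."""
    return p_a_and_b <= p_a

# The classic bank-teller setup: "bank teller AND feminist" cannot be
# more probable than "bank teller" alone, however vivid the story feels.
print(check_conjunction(0.05, 0.10))  # False -> instincts need revising
print(check_conjunction(0.05, 0.02))  # True -> at least coherent
```

Writing instinctive estimates down side by side and running even this trivial check is often enough to surface the incoherence.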
Agree that it’s good to predict more in fields without markets rather than with markets. One could explicitly not look at markets until predictions are made; I definitely did that often. It is helpful.
I think the “right” versus “intelligently-made but wrong” thing is at least important semantics. In our society, telling someone they were “wrong” versus “right” is a big deal. At the very least, most people will get the wrong impression of what’s going on if you say that Scott Adams, saying (as he explicitly did) 98% Trump in May 2016, “was right.” And that happens! People think that should be considered good predicting, because you were super confident and it happened. Or that scene in Zero Dark Thirty, where the woman says “100%” that Osama is where she thinks he is, because that’s how you sound confident in a meeting. If you correctly solve the question “what is the probability of X given everything we know now?” and say 75%, and then X doesn’t happen, but 75% was the best guess you could have made at the time, I think saying you were “wrong” is both incorrect and likely to do net harm. It’s not enough to rely on someone’s long-term score, because most people don’t have one, and it would get ignored most of the time even if they did.
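One way to make the “correct but it didn’t happen” point concrete is a proper scoring rule. A hedged sketch with illustrative numbers: a 98% forecast that resolves true scores beautifully on that one event, but if the honest probability really was 75%, the overconfident forecast loses in expectation over many such calls:

```python
import math

def log_score(p: float, outcome: bool) -> float:
    """Logarithmic score of a probability forecast; closer to 0 is better."""
    return math.log(p if outcome else 1.0 - p)

# One resolved event can flatter overconfidence...
print(log_score(0.98, True))   # ~ -0.02: looks brilliant
print(log_score(0.75, False))  # ~ -1.39: looks "wrong"

# ...but suppose the best-available estimate really was 75%. In
# expectation over many such events, the calibrated 75% forecast
# beats the overconfident 98% one.
p_true = 0.75
ev_overconfident = p_true * log_score(0.98, True) + (1 - p_true) * log_score(0.98, False)
ev_calibrated = p_true * log_score(0.75, True) + (1 - p_true) * log_score(0.75, False)
print(ev_calibrated > ev_overconfident)  # True
```

This is exactly why a single resolution can’t settle “right” versus “wrong”: the score of one event rewards confidence, while the expectation rewards calibration.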
Biden markets were indeed dumb early on if your report is right, and I missed that boat because I wasn’t thinking about it; I only got into the PredictIt game this time around when Yang got up into the 8% range and there was actual free money. I don’t think it was inevitable he would run, but you definitely made an amazing trade. 70% to run plus dominating the polls does not equal 15%! Especially given that when the 70% event occurred, his value more than doubled.
That’s another good metric for evaluating trades/predictions that I forgot to include more explicitly: look at the new market prices (or probability estimates) after an event, and see what they imply about the old prediction. In this case, it clearly says that 15% was stupidly low. I like to think I too would have done that trade if I’d thought about it at the time, maybe even sold other candidates to get more of it, and looked at the general election prices. In hindsight it’s clear 20% was still way too low, but that’s a much smaller mistake and certainly more understandable.
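The decomposition behind “70% to run plus dominating the polls does not equal 15%” can be sketched directly. This is my own illustration with approximate numbers from the discussion above, not actual market data:

```python
# Back out what an old market price implicitly claimed, given the
# structure of the bet. Numbers are approximate illustrations.

def implied_conditional(p_before: float, p_event: float) -> float:
    """If winning requires the event, P(win) = P(event) * P(win | event),
    so the old price implies P(win | event) = p_before / p_event."""
    return p_before / p_event

p_nominee_before = 0.15  # rough market price before the announcement
p_runs = 0.70            # rough estimate that he would run

# The 15% price was implicitly saying: even conditional on running,
# only ~21% to win the nomination, despite leading the polls.
print(implied_conditional(p_nominee_before, p_runs))  # ~0.214
```

When the announcement came and the price more than doubled, the market itself was effectively conceding that the old implied ~21% conditional had been far too low.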
I agree that removing hindsight can be tough. I do think it is now clear that, e.g., Trump not getting nominated would have been extraordinarily hard absent a health issue, but did we have enough information at the time to know that? I think we mostly did? But I can’t be sure I’m playing fair here, either.
On the 50% approval thing: I do think we had unusually uneventful times until Covid-19. Covid-19 put Trump at 48.5%, and let’s face it, he had to try really hard not to break 50%, but he did manage it. Wasn’t easy; team effort.
May seemed to me (at the time) like someone who would keep going until failure but would quit on actual failure, but again hindsight is always a concern.
This also points to a general practice: it might be good to write down your basic reasoning when making predictions, to help prevent hindsight bias. It also means that if you get the right answer for the wrong reasons, you can, in important senses, still mark yourself wrong in ways that let you improve.