I think that people following the standards that seem credible to them upon reflection is the best you can hope for. Ideally, upon reflection, bets and experiments will be part of those standards for at least some people. Hopefully, some such groups will congeal into effective trade networks. If one usually reliable algorithm disagrees strongly with the others, then yes, in the short term you should probably effectively ignore it, but that can be done by squaring its assigned probabilities, taking harmonic or geometric means, etc., not by dropping it entirely; and more importantly, such deviations should be investigated with some urgency.
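To make the aggregation idea concrete, here is a minimal sketch of weighted geometric-mean pooling, one of the schemes mentioned above. The function name, the weights, and the example numbers are all illustrative assumptions, not anything specified in the thread; the point is only that a dissenting forecast can have its weight lowered rather than be dropped outright.

```python
import math

def geometric_mean_pool(probs, weights):
    """Pool probability forecasts by a weighted geometric mean,
    renormalized against the complementary event so the result
    is a valid probability. Lowering a forecaster's weight
    down-weights it without removing it from the pool."""
    assert len(probs) == len(weights) and all(0 < p < 1 for p in probs)
    total = sum(weights)
    # Weighted geometric means of P(event) and P(not event).
    log_p = sum(w * math.log(p) for p, w in zip(probs, weights))
    log_q = sum(w * math.log(1 - p) for p, w in zip(probs, weights))
    p_pool = math.exp(log_p / total)
    q_pool = math.exp(log_q / total)
    return p_pool / (p_pool + q_pool)

# Three forecasters agree (0.7); one dissents sharply (0.05).
equal = geometric_mean_pool([0.7, 0.7, 0.7, 0.05], [1, 1, 1, 1])
down  = geometric_mean_pool([0.7, 0.7, 0.7, 0.05], [1, 1, 1, 0.25])
```

With equal weights the dissenter drags the pool well below the majority view; with its weight reduced, the pooled estimate moves back toward 0.7 while the dissenting signal is still retained and can be investigated.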
> If one usually reliable algorithm disagrees strongly with others, yes, short term you should probably effectively ignore it, but that can be done via squaring assigned probabilities, taking harmonic or geometric means, etc, not by dropping it, and more importantly, such deviations should be investigated with some urgency.
I think we agree about this much more than we disagree. After writing this post, I had a conversation with Anna Salamon in which she suggested that—as you suggest—exploring such disagreements with some urgency was probably more important than getting the short-term decision right. I agree with this and I’m thinking about how to live up to that agreement more.
Regarding the rest of it, I did say “or give less weight to them”.
> I think that people following the standards that seem credible to them upon reflection is the best you can hope for. Ideally, upon reflection, bets and experiments will be part of those standards to at least some people.
Thanks for answering the main question.
I and at least one other person I highly trust have gotten a lot of mileage out of paying a lot of attention to cues like “Person X wouldn’t go for this” and “That cluster of people that seems good really wouldn’t go for this”, trying to think through why, and putting weight on those other approaches to the problem. I think other people do this too. If that counts as “following the standards that seem credible to me upon reflection”, maybe we don’t disagree too much. If it doesn’t, I’d say it’s a substantial disagreement.