I'd be interested to know more about why you think this is “fantastically wrong” and what you think we should do instead. The question the post is trying to answer is: “In practical terms, how should we take account of the distribution of opinion and epistemic standards in the world?” I would like to hear your answer to it. E.g., should we all just follow the standards that come naturally to us? Should only certain people do this? Should we follow the standards of some more narrowly defined group of people? Or some still narrower set of standards?
I see the specific sentence you objected to as very much a detail rather than a core feature of my proposal, so it would be surprising to me if this was the reason you thought the proposal was fantastically wrong. For what it’s worth, I do think that particular sentence can be motivated by epistemology rather than conformity. It is naturally motivated by the aggregation methods I mentioned as possibilities, which I have used in other contexts for totally independent reasons. I also think it is analogous to a situation in which I have 100 algorithms returning estimates of the value of a stock, and one of them says the stock is worth 100x market price while all the others say it is worth market price. I would not take a straight average here and conclude the stock is worth about 2x market price, even if the algorithm giving the weird answer were generally about as good as the others.
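To make the arithmetic concrete, here is a minimal sketch (in Python, with the illustrative numbers from the analogy) of why the straight average lands near 2x while more robust aggregates stay near market price:

```python
import statistics

# Illustrative numbers only: 99 algorithms estimate the stock at market
# price (1.0x) and one outlier estimates 100x.
estimates = [1.0] * 99 + [100.0]

print(statistics.mean(estimates))            # ~1.99: "worth about 2x market price"
print(statistics.median(estimates))          # 1.0:   the outlier barely registers
print(statistics.geometric_mean(estimates))  # ~1.047: outlier discounted, not dropped
```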
I think that people following the standards that seem credible to them upon reflection is the best you can hope for. Ideally, upon reflection, bets and experiments will be part of those standards for at least some people. Hopefully, some such groups will congeal into effective trade networks. If one usually reliable algorithm disagrees strongly with the others, then yes, in the short term you should probably effectively ignore it, but that can be done via squaring assigned probabilities, taking harmonic or geometric means, etc., not by dropping it; more importantly, such deviations should be investigated with some urgency.
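For concreteness, a hedged sketch (Python again, with made-up probabilities) of two of the pooling methods named above, showing how a lone strong dissent gets heavily discounted without being dropped; the squaring variant is left aside since its exact form isn't spelled out here:

```python
import statistics

# Made-up example: nine forecasters assign ~2% to an event, one dissenter
# assigns 95%. Pool the estimates without dropping the dissenter.
probs = [0.02] * 9 + [0.95]

print(statistics.mean(probs))            # ~0.113: straight average, dissent pulls hard
print(statistics.geometric_mean(probs))  # ~0.029: dissent heavily discounted
print(statistics.harmonic_mean(probs))   # ~0.022: dissent almost ignored, still present
```

Unlike dropping the dissenter outright, these pools keep it in the mix, which fits the point that the deviation should still be investigated.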
> If one usually reliable algorithm disagrees strongly with the others, then yes, in the short term you should probably effectively ignore it, but that can be done via squaring assigned probabilities, taking harmonic or geometric means, etc., not by dropping it; more importantly, such deviations should be investigated with some urgency.
I think we agree about this much more than we disagree. After writing this post, I had a conversation with Anna Salamon in which she suggested, as you do, that exploring such disagreements with some urgency was probably more important than getting the short-term decision right. I agree with this, and I’m thinking about how to better live up to it.
Regarding the rest of it, I did say “or give less weight to them”.
> I think that people following the standards that seem credible to them upon reflection is the best you can hope for. Ideally, upon reflection, bets and experiments will be part of those standards for at least some people.
Thanks for answering the main question.
I and at least one other person I highly trust have gotten a lot of mileage out of paying a lot of attention to cues like “Person X wouldn’t go for this” and “That cluster of people that seems good really wouldn’t go for this”, trying to think through why, and putting weight on those other approaches to the problem. I think other people do this too. If that counts as “following the standards that seem credible to me upon reflection”, maybe we don’t disagree too much. If it doesn’t, I’d say it’s a substantial disagreement.