The combination of the two proposed explanations for why certain fields have a higher rate of one-boxing than others seems kind of plausible, but also very suspicious. Being more like decision theorists than like normies (and thus possibly getting more exposure to the pro-two-boxing arguments that are popular among decision theorists) looks a lot like being more predisposed to good critical thinking on these sorts of topics (and thus possibly more likely to support one-boxing for correct reasons). By combining these two effects, we can explain why people in some subfield might be more likely than average to one-box, and also why people in that same subfield might be more likely than average to two-box, and just pick whichever explanation correctly predicts whatever people in that field end up answering.
Of course, this complaint makes it seem especially strange that two-boxing ended up being so popular among decision theorists.
Yeah, I don’t think that combo of hypotheses is totally unfalsifiable (eg, normative ethicists doing so well is IMO a strike against my hypotheses), but it’s definitely flexible enough that it has to get a lot less credit for correct predictions. It’s harder to falsify, so it doesn’t win many points when it’s verified.
Fortunately, both parts of the hypothesis can be tested separately in some ways. E.g., maybe I’m wrong about ‘most non-philosophers one-box’ and the Guardian poll was a fluke; I haven’t double-checked yet, and don’t feel that confident in a single Guardian survey.