For what it’s worth, I know of at least one decision theorist, closely associated with the LessWrong community, who at least at one point not long ago leaned toward two-boxing. He may have changed his mind since, but it’s at least a data point showing that philosophers aligned with LessWrong-style thinking don’t necessarily one-box.
Yeah, I see possible signs of this in the survey data itself: decision theorists strongly favor two-boxing, yet many of their other answers are surprisingly LW-like, which is hard to explain without some causal connection such as ‘decision theorists are unusually likely to read LW’. It’s one reasonable explanation, anyway.