Disagree. It’s valuable to flag the causal process generating an idea, but it’s also valuable to provide legible argumentation, because most people can’t describe the factors which led them to their beliefs in sufficient detail to actually be compelling. Indeed, this is specifically why science works so well: people stopped arguing about intuitions, and started arguing about evidence. And the lack of this is why LW is so bad at arguing about AI risk: people are uninterested in generating legible evidence, and instead focus on presenting intuitions that are typically too fuzzy to examine or evaluate.
To add to that, trying to provide legible argumentation is also useful as a check on yourself: the attempt can reveal that your idea doesn’t actually make sense, or doesn’t make sense as stated, if that is indeed the case.