Relatedly, a sense I got from reading posts and comments on LessWrong:
A common reason a researcher ends up strawmanning another researcher's arguments, or publicly accusing one researcher of strawmanning another, is that they never spend the time and attention (because they're busy or overwhelmed?) to carefully read through the other researcher's sentences and consider how those sentences might be ambiguous and open to multiple interpretations.
When your interpretation of what (implicit) claim the author is arguing against seems particularly silly (even if the author plainly put a lot of effort into their writing), I personally think it’s often more productive (for yourself, other readers, and the author) to first paraphrase your interpretation back to the author.
That is, check in with the author to clarify whether they meant y (your interpretation), before going on a long trail of explanations of why y does not match your interpretation of what the other AI x-safety researcher(s) meant when they wrote z.