I think the use of dialogues to illustrate a point of view is overdone on LessWrong. Almost always, the ‘Simplicio’ character fails to accurately represent the smart version of the viewpoint he stands in for, because the author doesn’t try sufficiently hard to pass the ITT of the view they’re arguing against. As a result, not only is the dialogue unconvincing, it runs the risk of misleading readers about the actual content of a worldview. I think this risk is greater than with posts that simply state a point of view and argue against it, because the dialogue format appears to present a genuine representative of the opposing view, and structurally discourages disclaimers of the type “as I understand it, defenders of proposition P might state X, but of course I could be wrong”.
I’ve seen such dialogues, and felt exactly the same way. At least twice I’ve later found out that the dialogue actually happened and there was no misrepresentation or simplification, just a HUGE inferential distance about which models of the universe (really, models of groups of people are the main sticking points) should be applied in which circumstances.
‘Simplicio’ character fails to accurately represent the smart version of the viewpoint he stands in for, because the author doesn’t try sufficiently hard to pass the ITT of the view they’re arguing against.
Possibly this could also be a strength, because by representing the views separately like that it makes it easier to see exactly what assumptions are causing them to fail the ITT.
On the other hand, if they’re sufficiently far off, the dialogue basically goes off in the entirely wrong direction.
Do you have examples of dialogues that fail to pass the ITT? I’m curious if you think any of the dialogues I’ve read might have been misleading.