[Question] Why is “Argument Mapping” Not More Common in EA/Rationality (And What Objections Should I Address in a Post on the Topic?)
I recently published an article in the Georgetown Security Studies Review (GSSR) on the use of “argument management systems” (e.g., Kialo) for the complex debates that arise in fields where it’s often impractical to resolve disagreements through standard empirical methods (e.g., RCTs). I’ve long been puzzled that this method of discussion is not more widely supported in EA and Rationalist circles, and I am considering adapting my GSSR article to AI policy/safety research (or Longtermism more generally, which various people have criticized for being too speculative/theoretical rather than grounded in empirical tests) and posting it here. Before doing that, however, I would love to get a sense of people’s reasons for skepticism or apathy toward such methods, so I can address them in the post.
For what it’s worth, I have seen Leverage Research’s report on the topic, and I am aware of the criticism that “argument mapping” (in some formats) is overly formal and too complicated. (I plan to respond to these points.)
In short, I expect my argument to be fairly similar to what I laid out in my GSSR article: the way we currently present arguments (i.e., predominantly through prose/paragraph text) seems rife with points of failure and inefficiency, especially given that debates are often not linear but instead branch and contain cross-cutting points. In fields like international relations and peace/conflict studies, I have repeatedly encountered instances where people fail to (seriously) address existing counterarguments, and more generally it is hard for audiences to determine who has or hasn’t addressed which counterarguments. In contrast, I think that making one’s arguments more explicit and keeping track of them in a format that is more searchable and more permanent than memory or prose would help mitigate some of these problems.
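To make the branching claim concrete, here is a minimal sketch of how an argument map might be represented as a tree of claims, where each counterargument is a child node and unanswered objections can be found mechanically. This is a hypothetical toy model of my own, not the data model of Kialo or any other tool:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One claim in an argument map; children support or oppose it."""
    text: str
    stance: str = "thesis"  # "pro" or "con" relative to the parent claim
    children: list = field(default_factory=list)

    def add(self, text, stance):
        child = Node(text, stance)
        self.children.append(child)
        return child

    def unanswered(self):
        """Return counterarguments ("con" nodes) that have no reply beneath them."""
        out = []
        if self.stance == "con" and not self.children:
            out.append(self)
        for child in self.children:
            out.extend(child.unanswered())
        return out

# A toy map: a thesis, two objections, and a reply to only one of them.
thesis = Node("Argument maps improve complex debates")
obj1 = thesis.add("Formal maps are too complicated for most users", "con")
obj1.add("Simpler tools reduce that overhead", "pro")
obj2 = thesis.add("Prose is more persuasive than diagrams", "con")

print([n.text for n in thesis.unanswered()])
# → ['Prose is more persuasive than diagrams']
```

The point of the sketch is the query at the end: in a tree representation, "which counterarguments has nobody addressed?" is a trivial traversal, whereas in prose an audience has to reconstruct that bookkeeping from memory.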
To me, better methods of argumentation seem like a natural extension of the norms that promote statistics and experimental methods in science, but thus far I have found the EA/Rationalist communities fairly lukewarm toward the idea (even if they are more receptive on average than the general public).