I mostly share your concerns. You might appreciate this criticism of the paper here.
@Sofia Vanhanen and I are currently building a tool for facilitating deliberation, and the philosophy we’re trying to embody (which hopefully mitigates this to some extent) is to keep 100% of the object-level reasoning human-generated, and use AI systems to instead:
- Help users understand/navigate the state of a discussion (e.g. see Talk to the City)
- Provide nudges on the meta-level, for example:
  - Highlighting places where more attention is needed (or where a specific person’s input might be most helpful)
  - An “epistemic linter” that flags object-level patterns which are not truth-seeking (a rough sketch follows this list)
  - Matchmaking: connecting people who are likely to make progress together
  - Counterbalancing polarization/groupthink, and steering discussions away from attractors that leave a discussion stuck
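To make the “epistemic linter” idea concrete, here is a minimal sketch of what such a nudge might look like. Everything in it is my own assumption: the toy regex patterns stand in for what would more plausibly be a language-model classifier, and the names (`lint_comment`, `LintFlag`) are invented for illustration.

```python
import re
from dataclasses import dataclass

# Illustrative patterns only; a real linter would more plausibly use an
# LLM classifier. Each entry maps a flag name to a regex that matches a
# common non-truth-seeking move.
PATTERNS = {
    "overgeneralization": re.compile(r"\b(always|never|everyone knows|obviously)\b", re.I),
    "ad_hominem": re.compile(r"\byou (people|types|folks)\b", re.I),
    "appeal_to_popularity": re.compile(r"\b(most people|nobody) (agrees?|believes?|thinks?)\b", re.I),
}

@dataclass
class LintFlag:
    rule: str     # which pattern fired
    excerpt: str  # the matching span, shown back to the author

def lint_comment(text: str) -> list[LintFlag]:
    """Flag non-truth-seeking patterns in a draft comment, as a gentle nudge."""
    return [
        LintFlag(rule=rule, excerpt=match.group(0))
        for rule, pattern in PATTERNS.items()
        for match in pattern.finditer(text)
    ]

# The linter only surfaces flags; it never generates or rewrites content,
# so all object-level reasoning stays human-generated.
print(lint_comment("Obviously everyone knows this policy never works."))
```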
I’m also working on a deliberation tool with a similar philosophy, but with a stronger emphasis on generating structured output from participants.
I’ve noticed that discussions often devolve into arguments where we fixate on conclusions and pre-existing beliefs rather than critically examining the underlying methods and assumptions that shape our reasoning. I believe structured self-reflection, like writing an academic paper before engaging in debate, can help: with no audience and no judgment, participants are less defensive and more willing to review their mental models, frameworks, and methodologies. That review can lead to more inclusive, generalized mental models that explain previously incompatible phenomena, and ultimately to broader theories and perspectives. A better grasp of the causal relationships involved lets us propose more inclusive solutions with fewer unintended consequences.
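For concreteness, here is one hypothetical shape that structured self-reflection output could take before a participant enters a debate. The schema and field names are my own invention, not a description of either tool:

```python
from dataclasses import dataclass, field

# Hypothetical schema for the self-reflection step described above;
# every field name here is illustrative, not a real API.
@dataclass
class Reflection:
    position: str      # the conclusion the participant holds
    methodology: str   # how they arrived at it (sources, reasoning)
    assumptions: list[str] = field(default_factory=list)  # premises the position rests on
    cruxes: list[str] = field(default_factory=list)       # observations that would change their mind

reflection = Reflection(
    position="Remote deliberation scales better than town halls",
    methodology="Compared turnout across two (hypothetical) city pilots",
    assumptions=["Participation cost is the main driver of turnout"],
    cruxes=["Evidence that in-person trust effects outweigh cost savings"],
)
```

The point of writing down assumptions and cruxes before the debate is that the later discussion can then target methods and premises rather than conclusions.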
I’m particularly interested in how your tool handles matchmaking. In my approach, I’m experimenting with ranking participants based on the content they’ve engaged with, with the aim of exposing them to more diverse perspectives; a rough sketch is below. A colleague familiar with the Polis system suggested reinforcement-learning-based algorithms for this. It seems we’re tackling similar challenges from slightly different angles.
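To illustrate the kind of diversity-seeking matchmaking I have in mind, here is a minimal sketch. It assumes each participant is summarized by an agree/disagree vote vector over shared statements (roughly the representation Polis uses), and the greedy pairing is a hand-rolled stand-in for the reinforcement-learning policy my colleague suggested; the names and data are invented:

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two vote vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def pair_for_diversity(vectors: dict[str, np.ndarray]) -> list[tuple[str, str]]:
    """Greedily pair the two most dissimilar remaining participants."""
    remaining = dict(vectors)
    pairs = []
    while len(remaining) >= 2:
        names = list(remaining)
        # Lowest cosine similarity = most diverse pairing.
        best = min(
            ((a, b) for i, a in enumerate(names) for b in names[i + 1:]),
            key=lambda ab: cosine(remaining[ab[0]], remaining[ab[1]]),
        )
        pairs.append(best)
        for name in best:
            del remaining[name]
    return pairs

# +1 = agree, -1 = disagree, 0 = pass, one entry per shared statement.
votes = {
    "alice": np.array([1, 1, -1, 0]),
    "bob":   np.array([-1, -1, 1, 1]),
    "carol": np.array([1, 0, -1, -1]),
    "dave":  np.array([-1, 1, 1, 0]),
}
print(pair_for_diversity(votes))  # e.g. [('alice', 'bob'), ('carol', 'dave')]
```

An RL version would presumably optimize a longer-horizon objective (e.g. whether paired participants later update or converge on consensus statements) rather than raw vector dissimilarity.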