My primary concern revolves around the potential for AI to inadvertently diminish human creativity and engagement in collaborative projects. Specifically, I’m worried that projects like the Habermas Machine, while potentially innovative, might prioritize AI-generated outputs to the point of replacing the human effort and participation that fuels genuine understanding and creative problem-solving. I believe the paper “Human Creativity in the Age of LLMs” raises similar concerns.
Ultimately, I’m interested in exploring solutions that leverage AI to facilitate human collaboration, fostering mutual understanding and empowering individuals to generate insightful responses together (e.g., requiring human participants to demonstrate their understanding explicitly through reading-comprehension tasks on others’ opinions). My goal is to build a framework that enhances, rather than replaces, the human element.
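To make that parenthetical concrete, here is a minimal sketch of the kind of gate I have in mind. Everything here (names, flow) is a hypothetical illustration, not an existing system: a reply stays unpublished until the responder has restated the opinion they are answering and its author has confirmed the restatement.

```python
from dataclasses import dataclass


@dataclass
class Opinion:
    author: str
    text: str


@dataclass
class PendingReply:
    """A reply is held until the responder's restatement is approved."""
    responder: str
    target: Opinion
    restatement: str   # responder's own summary of the target opinion
    reply: str
    approved: bool = False


def approve_restatement(pending: PendingReply, approver: str) -> bool:
    """Only the original author can confirm they feel understood."""
    if approver != pending.target.author:
        return False
    pending.approved = True
    return True


def publish(pending: PendingReply) -> str:
    if not pending.approved:
        raise PermissionError("Restatement not yet approved by the author.")
    return f"{pending.responder} replies: {pending.reply}"


# Bob must show he understood Alice before his reply goes live.
alice = Opinion("alice", "Keep object-level reasoning human-generated.")
bob = PendingReply(
    responder="bob",
    target=alice,
    restatement="You want AI to support, not author, the actual arguments.",
    reply="Agreed, though meta-level nudges still seem fair game.",
)
approve_restatement(bob, approver="alice")
print(publish(bob))
```

This is essentially an “ideological Turing test” gate; the open design question is whether approval should come from the original author, as above, or from an automated comprehension check.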
If you instead insist that this system is meant to help the powerful and privileged, such as a king, use public opinion as a reference, then within that framing it would be legitimate.
I mostly share your concerns. You might appreciate this criticism of the paper here.
@Sofia Vanhanen and I are currently building a tool for facilitating deliberation, and the philosophy we’re trying to embody (which hopefully mitigates this to some extent) is to keep 100% of the object-level reasoning human-generated, and use AI systems to instead:
- Help users understand/navigate the state of a discussion (e.g. see Talk to the City)
- Provide nudges on the meta-level, for example:
  - Highlight places where more attention is needed (or where a specific person’s input might be most helpful)
  - An “Epistemic Linter” which flags object-level patterns that are not truth-seeking (a toy sketch follows this list)
  - Matchmaking, connecting people who are likely to make progress together
  - Counterbalancing polarization/groupthink, and steering discussions away from attractors that lead to the discussion getting stuck
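On the epistemic linter specifically, here is a toy sketch of the interface I imagine. The regex rules are illustrative placeholders (a real linter would presumably use an LLM classifier rather than pattern matching); the linter-style report is the point.

```python
import re

# Illustrative (hypothetical) patterns for non-truth-seeking moves.
LINT_RULES = {
    "overgeneralization": re.compile(r"\b(always|never|everyone|no one)\b", re.I),
    "ad_hominem": re.compile(r"\b(idiot|dishonest|shill)\b", re.I),
    "appeal_to_obviousness": re.compile(r"\b(obviously|clearly|any fool)\b", re.I),
}


def lint(comment: str) -> list[str]:
    """Return the names of the rules a comment trips, like a code linter."""
    return [name for name, pattern in LINT_RULES.items() if pattern.search(comment)]


print(lint("Obviously everyone who disagrees is a shill."))
# -> ['overgeneralization', 'ad_hominem', 'appeal_to_obviousness']
```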
I’m also working on a deliberation tool with a similar philosophy, but with a stronger emphasis on generating structured output from participants.
I’ve noticed that discussions often devolve into arguments in which we fixate on conclusions and pre-existing beliefs rather than critically examining the underlying methods and premises that shape our reasoning. I believe structured self-reflection, like writing an academic paper before engaging in debate, can help. With no audience and no judgment during self-reflection, participants are less defensive and more open to reviewing their mental models, frameworks, and methodologies. That openness makes it easier to adopt more inclusive, generalized mental models that explain previously incompatible phenomena, leading to broader theories and perspectives. The improved understanding of causal relationships, in turn, lets us propose better, more inclusive solutions with fewer unintended consequences.
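For what it’s worth, here is a hypothetical sketch of the structured output I’m aiming for: a template that forces the separation between a conclusion and the premises and methodology behind it, filled in privately before the debate. The field names are my own placeholders.

```python
from dataclasses import dataclass


@dataclass
class Reflection:
    """A pre-debate self-reflection, written privately (no audience, no judgment)."""
    conclusion: str            # the belief the participant starts from
    premises: list[str]        # what must be true for the conclusion to hold
    methodology: str           # how the participant arrived at the conclusion
    would_change_mind: str     # observations that would overturn the conclusion

    def ready_for_debate(self) -> bool:
        # Enter the discussion only after separating the conclusion
        # from the premises and methods that produced it.
        return bool(self.premises and self.would_change_mind)


r = Reflection(
    conclusion="AI should only assist deliberation at the meta-level.",
    premises=["AI-written object-level text crowds out human reasoning."],
    methodology="Informal observation of AI-mediated forums.",
    would_change_mind="Evidence that AI summaries increase participant originality.",
)
assert r.ready_for_debate()
```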
I’m particularly interested in how your tool handles matchmaking. In my approach, I’m experimenting with ranking participants based on the content they’ve engaged with, aiming to expose them to more diverse perspectives. A colleague familiar with the Polis system suggested reinforcement learning-based algorithms for this. It seems like we’re tackling similar challenges from slightly different angles.
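To illustrate what I mean by ranking for diversity, here is a minimal sketch under one simplifying assumption of mine: each participant’s engagement history is collapsed into a vector over topic clusters (in the spirit of a Polis vote matrix), and candidate partners are ranked most-dissimilar-first. The RL-based variant my colleague suggested would replace this fixed heuristic with a learned policy.

```python
import numpy as np


def rank_partners(engagement: np.ndarray, person: int) -> list[int]:
    """Rank other participants by cosine similarity to `person`,
    most dissimilar first, to surface diverse perspectives.

    engagement: (n_participants, n_topics) matrix, e.g. agree/disagree
    counts per topic cluster, in the spirit of a Polis vote matrix.
    """
    v = engagement[person]
    norms = np.linalg.norm(engagement, axis=1) * np.linalg.norm(v)
    similarity = engagement @ v / np.where(norms == 0, 1, norms)
    order = np.argsort(similarity)  # ascending = most dissimilar first
    return [int(i) for i in order if i != person]


# Three participants over four topic clusters (+1 agree, -1 disagree).
votes = np.array([
    [ 1,  1, -1,  0],   # participant 0
    [ 1,  1, -1,  1],   # participant 1: similar to 0
    [-1, -1,  1,  0],   # participant 2: opposite of 0
])
print(rank_partners(votes, person=0))  # -> [2, 1]: opposing view ranked first
```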
If someone insists that this system serves the goal of helping a powerful, privileged figure, such as a king, use their power wisely by consulting public opinion, then the system may well work within that framework. In that case I suppose it would be legitimate, even though I don’t endorse such a framework for maintaining our society.