I’m not sure about having a centralised group doing this, but I did experiment with making a tool that could help infer consequences from beliefs. Imagine something a little like this, but with chains of philosophical statements that each carry a degree of confidence. Users would assign confidences to axioms and construct trees of argument from them, and the system would automatically derive confidences for the conclusions. It could even exist as a competitive game, with a community determining the confidence of axioms. It could also be used to rapidly locate differences in opinion, i.e. to infer the main points of contention from different axiom weightings. If anyone knows of anything similar, or has suggestions for such a system, I’d love to hear them, including any reasons why it might fail, because I think it’s an interesting approach to the ‘how to efficiently debate reasonably’ problem.
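To make the propagation step concrete, here’s roughly the kind of rule I mean. This is only a toy sketch, not the tool I built: it assumes confidences are probabilities in [0, 1], treats an argument’s premises as independent, and gives each inference step a hypothetical ‘strength’ factor for how strongly the premises support the conclusion. Real arguments (shared premises, correlated axioms, disjunctive support) would need something more like a Bayesian network.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    claim: str
    confidence: float | None = None       # set directly for axioms
    premises: list["Node"] = field(default_factory=list)
    strength: float = 1.0                 # how strongly the premises support the claim

    def evaluate(self) -> float:
        # Axiom: confidence is assigned by users (or a community vote).
        if not self.premises:
            return self.confidence
        # Conclusion: holds only if every premise holds AND the inference is sound.
        # Multiplying assumes the premises are independent, which is a big simplification.
        p = self.strength
        for premise in self.premises:
            p *= premise.evaluate()
        return p

# Two user-weighted axioms feeding one conclusion via a single inference step.
a1 = Node("Suffering is bad", confidence=0.95)
a2 = Node("We can reduce suffering", confidence=0.8)
conclusion = Node("We should act to reduce suffering",
                  premises=[a1, a2], strength=0.9)
print(f"{conclusion.claim}: {conclusion.evaluate():.2f}")  # -> 0.68
```

The point-of-contention feature then falls out almost for free: evaluate the same tree under two users’ axiom weightings and rank conclusions by how far their derived confidences diverge; the biggest gaps are the inferred disagreements worth debating first.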