Monolithic vs subjective
As pointed out, it's hard to gather everyone's input into a single result. Rather than have a single fallacy / not-fallacy rating, let each user express (and own) whether a statement is fallacious. In the usual case the result would read something like "95.4% of people think this is a false dichotomy". However, there is valuable information in cross-correlating which arguments pass which evaluators. You could have functionality to "ignore all evaluators that think this is a fair argument". People could also build a reputation as quality evaluators. There is a problem/feature in that an evaluator's standard need not be rigour. You could, for example, have a prominent evaluator for each major political leaning. Or you could aggregate the information by cross-referencing proclaimed political identity, i.e. "65% of self-identified Democrats think this argument is fair".
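To make the mechanics concrete, here is a minimal sketch of how per-evaluator verdicts, the "ignore evaluators who call this fair" filter, and identity-based aggregation could be stored and computed. It is an illustration only; the data model and helper names (`Verdict`, `Statement`, `ignore_evaluators_who_call_fair`) are hypothetical assumptions, not an existing system.

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    evaluator: str               # user who owns this judgement
    fallacy: str | None          # e.g. "false dichotomy"; None means "fair argument"
    identity: str | None = None  # optional self-proclaimed label, e.g. "democrat"

@dataclass
class Statement:
    text: str
    verdicts: list[Verdict] = field(default_factory=list)

    def pct_calling(self, fallacy: str) -> float:
        """Share of evaluators who tagged this statement with the given fallacy."""
        if not self.verdicts:
            return 0.0
        hits = sum(1 for v in self.verdicts if v.fallacy == fallacy)
        return 100 * hits / len(self.verdicts)

    def pct_fair_among(self, identity: str) -> float:
        """Share of evaluators with a given self-identified label who call it fair."""
        group = [v for v in self.verdicts if v.identity == identity]
        if not group:
            return 0.0
        return 100 * sum(1 for v in group if v.fallacy is None) / len(group)

def ignore_evaluators_who_call_fair(statements: list[Statement],
                                    target: Statement) -> list[Statement]:
    """Drop, from every statement, the verdicts of evaluators who call `target` fair."""
    ignored = {v.evaluator for v in target.verdicts if v.fallacy is None}
    return [Statement(s.text, [v for v in s.verdicts if v.evaluator not in ignored])
            for s in statements]

# Example use with made-up data:
s = Statement("Either we ban X or society collapses",
              [Verdict("alice", "false dichotomy", "democrat"),
               Verdict("bob", None, "republican")])
print(s.pct_calling("false dichotomy"))  # 50.0
print(s.pct_fair_among("republican"))    # 100.0
```

The point of the sketch is that the raw per-evaluator verdicts are kept around, so any aggregation (overall percentage, per-identity percentage, filtered views) is just a query over them rather than a single baked-in score.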
Applicability vs context
Being able to target already-produced texts means there would be wide applicability. However, I am a little concerned about selection effects in what ends up as a "thing to scrutinize". This kind of thing would be effective for small, isolated arguments. However, politicians who tailor their arguments to the situation they are presented in could be misrepresented by being judged outside of that speech situation. Maybe they know there are better / more valid arguments for their position but choose to utter those they know their audience can relate to. Bringing those arguments under close scrutiny would partly miss the point. I guess part of the idea would be to apply pressure to always use arguments that could pass harsher standards? However, I can see many downsides to that.
I would rather have all the arguments to be processed be explicitly (re)created in the context of the website. Then it would be clear that everybody involved respects the clean-play attitude and that the arguments are meant to be elaborate and precise. This could mean that only the core and essential points would be covered. That is, it would not be a witch hunt to harass other media but an internal matter.
Explicitness vs summary score
I would have each argument input in a special language/notation that forces every argument to be explicit and computer-readable. The arguments would not be prose but collections and networks of semantic tokens. This would provide independence from any particular human language: French and English users would render the tokens in their own language but would be manipulating exactly the same ones, so a claim made in French would be accessible to the English user too. With the guarantee of computer readability you could do things like compare the axioms of two users and point out where they contradict; at such a point a discussion is possible. You could then track how often those discussions shifted opinions and which arguments were effective with which populations / belief bases. This could easily be turned into a tool for anti-knowledge seeking, testing which manipulations work best.
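As a rough sketch of what such a notation could look like, here is one way language-independent semantic tokens and a basic axiom-contradiction check could be represented. The token lexicon, the `Claim` structure, and the `contradictions` helper are hypothetical illustrations of the idea, not a proposal for the actual notation.

```python
from dataclasses import dataclass

# Language-independent semantic tokens; surface wording is looked up per locale.
LEXICON = {
    "TAX":  {"en": "taxation", "fr": "l'imposition"},
    "GOOD": {"en": "is good", "fr": "est bonne"},
    "NOT":  {"en": "it is not the case that", "fr": "il est faux que"},
}

@dataclass(frozen=True)
class Claim:
    tokens: tuple[str, ...]   # e.g. ("TAX", "GOOD")
    negated: bool = False

    def render(self, lang: str) -> str:
        """Render the same underlying tokens in a chosen human language."""
        words = [LEXICON[t][lang] for t in self.tokens]
        if self.negated:
            words.insert(0, LEXICON["NOT"][lang])
        return " ".join(words)

def contradictions(axioms_a: set[Claim], axioms_b: set[Claim]) -> list[tuple[Claim, Claim]]:
    """Pairs of axioms where one user asserts a claim the other user negates."""
    return [(a, b) for a in axioms_a for b in axioms_b
            if a.tokens == b.tokens and a.negated != b.negated]

# A claim entered by a French user is directly accessible to an English user:
claim = Claim(("TAX", "GOOD"))
print(claim.render("fr"))  # l'imposition est bonne
print(claim.render("en"))  # taxation is good

# And contradictory axioms between two users can be surfaced for discussion:
print(contradictions({claim}, {Claim(("TAX", "GOOD"), negated=True)}))
```

The same structure would also make the later tracking possible (which discussions shifted opinions, which arguments worked on which belief bases), since every claim is a comparable data object rather than free-form prose.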
If such a reduction is not done, the meaning of any end result will be a bit nebulous. Its meaning would depend on the process by which it was produced, and it would mask the approval of a group in the guise of inarguable numeric data. If there were a clear vision of what "clean play" consists of, it could be useful, but I doubt there is a single axis so critically important to track. I would rather have metrics that tell me something but don't give a conclusion than reach a conclusion when I am not sure what it tells me.