I wouldn’t call it a bad implementation because of the occasional wrongly reported tension. To me, the purpose of the service is not to provide a 100% coherent and consistent questionnaire. The idea is that it points to conceptions that might contradict one another. Whether or not they in fact do contradict should be left to closer investigation. But merely pointing the user to these possible contradictions should prove useful, because it’s so difficult to find these inconsistencies by oneself.
It seems clear to me that it will generate some false positives. It will also come up with chains of logic that aren’t obviously true or false (because it’s impossible to create statements that are completely free of differing interpretations). Of course, the better the implementation as a whole (both the logic system and the sets of statements), the fewer of these false positives and other inconsistencies it will generate, but I do think it’s impossible to remove them all. Instead, the service should perhaps be considered more of a probing machine.
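To make the "probing machine" idea concrete, here is a minimal sketch of how such a tension-reporter might work. All names and statements here are hypothetical illustrations, not the service's actual logic system: the checker only flags cases where a user accepts a premise but rejects a consequence that some encoded rule (itself resting on a contestable reading) links to it, which is exactly why false positives are unavoidable.

```python
def find_tensions(answers, implications):
    """Flag *possible* tensions: cases where the user accepts a premise
    but rejects a statement that a rule treats as its rough consequence.
    'answers' maps statement -> True/False; 'implications' is a list of
    (premise, consequence) pairs. A reported tension is a prompt for
    closer investigation, not a verdict of contradiction."""
    tensions = []
    for premise, consequence in implications:
        if answers.get(premise) is True and answers.get(consequence) is False:
            tensions.append((premise, consequence))
    return tensions

# Hypothetical example: the implication itself hinges on one reading
# of "always", so the flagged tension may well be a false positive.
answers = {
    "lying is always wrong": True,
    "lying to save a life is wrong": False,
}
implications = [("lying is always wrong", "lying to save a life is wrong")]
print(find_tensions(answers, implications))
```

The point of the sketch is that the rule table, not the checker, carries all the interpretive load; improving the statement sets shrinks the false-positive rate but can never eliminate it.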
To claim that it’s a bad implementation sounds to me like claiming it’s not a useful implementation at all. Sure, it’ll probably have quite a few glitches and bugs, but the above comment doesn’t give any particular evidence that the implementation as such doesn’t work correctly. It seems almost equally likely that such possible inconsistencies are an inherent part of this kind of implementation.
If the implementation constantly pointed to tensions that are obviously not real tensions (or useful observations in general), then I’d be more inclined to call it a bad implementation. After all, such a claim will discourage people from trying out the service, and I don’t see grounds for it in the example cousin_it gave.
The other common complaint seems to be the lack of precise definitions. Again, I see this more as a feature than a bug. When taking the questionnaire, you use whatever definition you have for the concept, and with the service you can find out whether your definition leads to inconsistent beliefs.