Good idea, bad implementation. Right now it thinks I have this “tension”, but I’m pretty sure it’s not a tension:

If the mind is physically independent of the material body but the physical world is closed, empirical observation of the material body cannot be sufficient to determine the existence of a mind.
Versus:
If the program is [abstractly] independent of the [particular] material computer but the physical world is closed, empirical observation of the material computer cannot be sufficient to determine the existence of a running program.
It’s the [...] that hurts. “It is possible for one’s mind to exist outside of one’s material body” does not imply “the mind is physically independent of the material body”. The mind is physically dependent and abstractly independent.
I did have some difficulty resolving all the tensions, but I was able to do so. I found that there were often alternate interpretations of a statement that would resolve a tension while still being plausible readings. For example, one that I remember was interpreting some of the questions about the “physical body” more generally as being about a “physical substrate”. Sometimes the tension page didn’t offer the question that needed reinterpretation, in which case I deferred the tension until I saw one that contained the statement to be reinterpreted.
It definitely does need a lot of work, but I can imagine a tool like this having profound effects on people once the bugs are worked out and it is applied to mind-killers and to beliefs and habits where cognitive biases figure prominently.
One major thing that needs to be improved, if they intend ordinary people to use it for everyday issues like politics and abortion, is to make the tension page much friendlier. Most LWers have probably studied logic and can fairly easily interpret the tension explanation, but most people have no clue about logic and won’t understand the implicit inferences that aren’t spelled out (like the fact that “A → B” validly yields its contrapositive “¬B → ¬A”).
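To make that concrete, here is a minimal sketch in Python of the kind of inference such a checker presumably performs. Everything here is hypothetical (the proposition names, the RULES list, and find_tensions are invented for illustration, not taken from the actual service): close the user’s yes/no answers under modus ponens and the contrapositive, and flag any conflict with a stated answer.

```python
# Hypothetical implication rules of the form "antecedent -> consequent".
RULES = [
    ("mind_can_exist_outside_body", "mind_independent_of_body"),
    ("mind_independent_of_body", "observation_cannot_detect_mind"),
]

def find_tensions(answers, rules):
    """answers: proposition -> True/False, as the user stated them."""
    derived = dict(answers)
    tensions = set()
    changed = True
    while changed:
        changed = False
        for a, b in rules:
            # Modus ponens: from A and "A -> B", conclude B.
            if derived.get(a) is True:
                if derived.get(b) is False:
                    tensions.update([a, b])  # conflicts with an existing "no"
                elif b not in derived:
                    derived[b] = True
                    changed = True
            # Modus tollens (the contrapositive): from not-B and "A -> B", conclude not-A.
            if derived.get(b) is False:
                if derived.get(a) is True:
                    tensions.update([a, b])
                elif a not in derived:
                    derived[a] = False
                    changed = True
    return sorted(tensions)

# cousin_it's case: "yes" to possible existence outside the body,
# "no" to physical independence.
print(find_tensions(
    {"mind_can_exist_outside_body": True, "mind_independent_of_body": False},
    RULES,
))  # -> ['mind_can_exist_outside_body', 'mind_independent_of_body']
```

Note what this makes visible: the reported “tension” is only as good as the rule that produced it. If the first rule conflates abstract independence with physical independence, the checker dutifully derives a tension that isn’t one, which is exactly the complaint above.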
You’re calling it a “bad implementation” because you think you’ve found a tension that in reality is not a tension?

I wouldn’t call it a bad implementation for the occasional wrongly reported tension, especially when it’s not completely clear why it reports such a tension. To me, the purpose of the service is not to provide a 100% coherent and consistent questionnaire. The idea is that it points to conceptions that might contradict each other. Whether or not they in fact do contradict should be left to closer investigation. But merely pointing the user to these possible contradictions should prove useful, because it’s so difficult to find these inconsistencies by oneself.
It seems clear to me that it will generate some false positives. It will also come up with chains of logic that aren’t obviously true or false (because it’s impossible to write statements that are completely free of differing interpretations). Of course, the better the implementation as a whole (both the logic system and the sets of statements), the fewer of these false positives and other inconsistencies it will generate, but I do think it’s impossible to remove them all. Instead, the service should perhaps be thought of more as a probing machine.
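To illustrate the “probing machine” idea, here is another hypothetical sketch (reusing the invented RULES above): rather than a bare verdict, it reports the chain of inferences behind each candidate tension, so the user can see which step rests on a contestable interpretation and judge it for themselves.

```python
def probe(answers, rules):
    # known: proposition -> (value, human-readable chain of steps so far)
    known = {p: (v, [f"you answered: {p} = {v}"]) for p, v in answers.items()}
    conflicts = {}
    changed = True
    while changed:
        changed = False
        for a, b in rules:
            # Modus ponens: A true and "A -> B" give B true.
            if known.get(a, (None,))[0] is True:
                chain = known[a][1] + [f"{a} -> {b}  (modus ponens)"]
                if b not in known:
                    known[b] = (True, chain)
                    changed = True
                elif known[b][0] is False and b not in conflicts:
                    conflicts[b] = chain + [f"...but you answered: {b} = False"]
            # Contrapositive: B false and "A -> B" give A false.
            if known.get(b, (None,))[0] is False:
                chain = known[b][1] + [f"{a} -> {b}  (contrapositive)"]
                if a not in known:
                    known[a] = (False, chain)
                    changed = True
                elif known[a][0] is True and a not in conflicts:
                    conflicts[a] = chain + [f"...but you answered: {a} = True"]
    return conflicts

for prop, chain in probe(
    {"mind_can_exist_outside_body": True, "mind_independent_of_body": False},
    RULES,
).items():
    print(f"Possible tension around {prop}:")
    for step in chain:
        print("  " + step)
```

The chain, not the verdict, is the real product here: the user gets to decide whether a step like “can exist outside the body → physically independent” is a fair reading of what they meant.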
Calling it a bad implementation sounds to me like saying it’s not a useful implementation at all. Sure, it’ll probably have quite a few glitches and bugs, but the above comment doesn’t give any particular evidence that the implementation as such doesn’t work correctly. It seems almost equally likely that such inconsistencies are an inherent part of this kind of implementation.
If the implementation constantly pointed to tensions that are obviously not real tensions (or not useful observations in general), then I’d be more inclined to call it a bad implementation. After all, such a claim will discourage people from trying out the service, and I don’t see grounds for it in the example cousin_it gave.
The other common complaint seems to be the lack of precise definitions. Again, I see this more as a feature than a bug. When taking the questionnaire, you bring whatever definition you have for each concept, and the service lets you find out whether that definition leads to inconsistent beliefs.