Scott Aaronson announced Worldview Manager, “a program that attempts to help users uncover hidden inconsistencies in their personal beliefs”.
You can experiment with it here. The initial topics are Complexity Theory, Strong AI, Axiom of Choice, Quantum Computing, Libertarianism, Quantum Mechanics.
“Mostly agree” is a higher degree of agreement than “Agree”?
To “Somewhat agree” that everyone should have the vote and “Disagree” that children should have the vote is inconsistent?
Obviously this is the work of the Skrull “Scott Aaronson”, whose thinking is not so clear.
Also, almost every question is so broken as to make answering it completely futile. So much so that it’s hard to believe it was an accident.
I find it hard to believe that you could really think the most likely explanation of the flaws you perceive is that Aaronson and the students who implemented this purposely introduced flaws and are trying to sabotage the work. So why do you utter such nonsense?
And did it not occur to you that disagreeing that children should have the vote could be reconciled by being neutral on everybody having the vote? That is what I did, after realizing that there are plausible interpretations under which I would disagree and plausible interpretations under which I would agree.
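To spell out why neutrality dissolves the tension: if the site stores an implication from “everyone should have the vote” to “children should have the vote”, then agreeing with the first while disagreeing with the second violates it, but a neutral answer to the first constrains nothing. Here is a minimal sketch under that assumption (the statement pair, the implication, and the sign convention are invented for illustration; the site’s actual inference rules aren’t documented here):

```python
# Hypothetical sketch of why a neutral answer can dissolve a reported
# tension. The statement pair, the stored implication, and the sign
# convention are assumptions for illustration, not Worldview Manager's
# actual logic.

AGREE, NEUTRAL, DISAGREE = 1, 0, -1

def in_tension(answer_a, answer_b):
    """Check one stored implication A -> B against two answers.

    The implication is violated only when A is (somewhat) agreed and B
    is (somewhat) disagreed; a neutral answer constrains nothing.
    """
    return answer_a > 0 and answer_b < 0

# A = "Everyone should have the vote", B = "Children should have the vote"
print(in_tension(AGREE, DISAGREE))    # True: the tension as reported
print(in_tension(NEUTRAL, DISAGREE))  # False: going neutral on A resolves it
```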
Whether you consider this sabotage or not depends on what you think the goal of the site’s authors was. It certainly wasn’t to help find inconsistencies in people’s thinking, given the obvious effort that went into constructing questions that had multiple conflicting interpretations.
Quite.
I just tried the one for AI and I think it’s not quite accurate. One of the biggest issues is that some of the terms need to be precisely defined and they are not. The other issue I found was that the analysis of my beliefs was not completely accurate, because it did not take all the answers into account properly.
It’s an interesting idea, but it needs work.
I didn’t find the lack of precise definitions a problem.
I got this conflict between my acceptance of the draft in the unlikely event it would be useful, and my belief that all acts I think the Government should be allowed to do are currently allowed. It doesn’t seem to know of the existence of this Supreme Court ruling.
Interesting link. I played with it for a while. It kept misunderstanding the nuances of my responses, telling me I was wrong when I wasn’t, then refusing to listen to my replies. So I stopped playing with it. Two in one day. What are the chances?
Good idea, bad implementation. Right now it thinks I have this “tension”, but I’m pretty sure it’s not a tension:

If the mind is physically independent of the material body but the physical world is closed, empirical observation of the material body cannot be sufficient to determine the existence of a mind.

Versus:

If the program is [abstractly] independent of the [particular] material computer but the physical world is closed, empirical observation of the material computer cannot be sufficient to determine the existence of a running program.
It’s the [...] that hurts. “It is possible for one’s mind to exist outside of one’s material body.” does not imply “the mind is physically independent of the material body”. It’s physically dependent and abstractly independent.
I did have some difficulty resolving all tensions, but I was able to do so. I found that there were often alternate interpretations of a statement that would resolve a tension but were still plausible interpretations. For example, one that I remember was interpreting some of the questions about “physical body” more generally as “physical substrate”. Sometimes the tension page didn’t offer the question that needed reinterpretation, in which case I deferred the tension until I saw a tension that contained the statement to be reinterpreted.
It definitely does need a lot of work, but I can imagine a tool like this having profound effects on people once all the bugs are worked out and it is applied to mind-killers and to beliefs/habits where cognitive biases figure prominently.
One major thing that needs to be improved, if they intend normal people to use it for normal issues like politics, abortion, etc., is to make the tension page much friendlier. Most LWers have probably studied logic and can pretty easily interpret the tension explanation, but most people have no clue about logic and won’t understand the implicit inferences that aren’t spelled out (for example, that the contrapositive of “A → B” is valid).
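For instance, a friendlier tension page might spell that inference out in plain words rather than leaving it implicit. A hypothetical sketch of such an explanation (the statement pair and the stored implication are invented for the example; this is not the site’s actual code):

```python
# Hypothetical sketch of a friendlier tension explanation that spells the
# contrapositive step out in words. The statement pair and the stored
# implication are invented for illustration, not Worldview Manager's code.

def explain_tension(a: str, b: str) -> str:
    """Explain the clash of agreeing with `a` but disagreeing with `b`,
    given a stored implication a -> b, without assuming the reader knows
    that the contrapositive (not-b -> not-a) is a valid inference."""
    return (
        f"You agreed that: {a}\n"
        f"You disagreed that: {b}\n"
        f"But the first statement implies the second. Read the other way\n"
        f"around (its contrapositive): if the second is false, the first\n"
        f"must be false too. So the two answers cannot both stand."
    )

print(explain_tension(
    "Everyone should have the vote",
    "Children should have the vote",
))
```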
You’re calling it a “bad implementation” because you think you’ve found a tension that in reality is not a tension?
I wouldn’t call it a bad implementation for the occasional wrongly reported tension, especially when it’s not completely clear why it reports such a tension. To me, the purpose of the service is not to provide a 100% coherent and consistent questionnaire. The idea is that it points to conceptions that might contradict each other. Whether or not they in fact contradict should be left to closer investigation. But merely pointing the user to these possible contradictions should prove useful, because it’s so difficult to find these inconsistencies by oneself.
It seems clear to me that it will generate some false positives. It will also come up with chains of logic that aren’t obviously true or false (because it’s impossible to create statements that are completely free of differing interpretations). Of course, the better the implementation as a whole (both the logic system and the sets of statements), the fewer of these false positives and other inconsistencies it will generate, but I do think it’s impossible to remove them all. Instead, the service should perhaps be considered more like a probing machine.
To claim that it’s a bad implementation sounds to me like saying it’s not a useful implementation at all. Sure, it’ll probably have relatively many glitches and bugs, but the above comment doesn’t give any particular evidence that the implementation as such doesn’t work correctly. It seems almost equally likely that such possible inconsistencies are an inherent part of this kind of implementation.
If the implementation constantly pointed to tensions that are obviously not real tensions (or useful observations in general), then I’d be more inclined to call it a bad implementation. After all, such a claim will discourage people from trying out the service, and I don’t see a reason for such a claim in the example cousin_it gave.
The other common complaint seems to be the lack of precise definitions. Again, I see this more as a feature than a bug. When taking the questionnaire, you use whatever definition you have for the concept, and with the service you can find out whether your definition leads to inconsistent beliefs.