I don’t think that’s the case. Trying to put complex concepts into two binary boxes is done very frequently in the Western tradition, but there’s no inherent argument that it’s the best way to do things. Science and Sanity argues in detail why binary thinking is limiting.
As for this particular case of the implicit/explicit distinction, most kinds of reasoning tend to be a mix of both.
Reasoning that’s completely explicit is the kind of reasoning that can be done by a computer with very limited bandwidth. For many problems, we know that computers can’t solve them as easily as they can calculate 23472349 * 5435408, which can be done completely explicitly. If you limit yourself to what can be made completely explicit, you limit yourself to a level of intelligence that can’t outperform computers with very limited memory/CPU power.
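To make the explicit case concrete, a minimal sketch: the entire computation is a fixed mechanical procedure with no judgment calls anywhere.

```python
# Fully explicit reasoning: a fixed, mechanical procedure yields the answer.
# Python's arbitrary-precision integers carry out the digit-level steps.
print(23472349 * 5435408)  # 127581793533392
```

Anything that can be specified to this level of explicitness is, by definition, executable on very modest hardware, which is the point of the contrast above.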
Explicit reasoning has its disadvantages, but is still hard to do without. In talking about superforecasters, you are taking it that someone has managed to determine who they are, as opposed to ordinary forecasters, raving lunatics, etc. Determining that kind of thing is where explicit reasoning comes in. What’s the alternative? Groups of people intuiting that each other are reliable intuiters?
Explicit reasoning has its disadvantages, but is still hard to do without.
That’s why you mix it with implicit reasoning if you care about the outcome of the reasoning process. Doing everything implicitly is as bad as doing everything explicitly.
I would have thought the problem with doing everything explicitly is that it is not possible.
Our usual way of combining explicit and implicit reasoning is to reason explicitly from premises which we find intuitively appealing, i.e. which we arrive at by implicit reasoning. That isn’t a solution to the problem, that is the problem: everything is founded on presuppositions, and if they are implicit we can’t check how they are arrived at, and we also can’t check how reliable they are without needing to use further presuppositions.
Korzybski seems to be saying we should be using more implicit reasoning. I don’t see how that helps.
Korzybski seems to be saying we should be using more implicit reasoning. I don’t see how that helps.
I don’t think that’s what he’s saying. In the case of “consciousness of abstraction” he even encourages people to be explicit about things that they usually aren’t.
Korzybski takes a long book to explain how he thinks reasoning should be done, and coins a bunch of basic, internally consistent concepts on which it should be built. I don’t think I can give you a full understanding of how the framework works in the space of a few comments.
Does it address the problem at hand?
Most statements we make in general semantics are about maps, and there’s no presumption that the map is real or is the territory. Indeed, being explicit about the fact that it isn’t is an important part.
How does that address the presumption problem? You could say that no statement made by anybody has any bearing on reality, so the presumptions they are based on don’t matter... but if that kind of sweeping anti-realism were a good solution, it would have been adopted long ago.
I don’t think General Semantics is anti-realism any more than Einstein’s relativity theory is anti-realism because it states that a lot is relative. I think General Semantics hasn’t been adopted because it’s actually hard to learn to switch to thinking in terms of General Semantics.
Academic science in the 20th century worked to compartmentalize knowledge by subject in a quite specific way, and a discipline like General Semantics didn’t fit into that compartmentalization.
It’s similar to how Cybernetics as a field didn’t make it big because it doesn’t fit into the common categorisation.
I am not saying that GS is necessarily anti-realistic, just trying to find some relevance to your comment. I don’t suppose I will ever find out how GS solves the presupposition problem, since you seem to be more interested in saying how great it is in the most general possible terms.
Answering the question is like explaining how a mathematical proof that runs for 200 pages works. GS is a complex system that builds on itself.
Do you feel confident you personally have the answer in your own mind, or are you just running on the assumption that GS must contain it somewhere, because of its general wonderfulness?
The outside view: http://lesswrong.com/lw/54u/bayesian_epistemology_vs_popper/3v49
I think the problem doesn’t make sense in the GS paradigm. Kuhn wrote that problems posed in one paradigm aren’t necessarily expressible in the paradigm of another framework, and I think this is a case like that.
According to Kuhn, science needs to have a crisis before it stops using the existing paradigm and moves to a different one.
In the field of medicine you could say that the paradigm of Evidence-Based Medicine solved certain issues that the prevailing scientific paradigm had at the time Korzybski wrote. Thinking in terms of probabilities and controlled trials solves certain practical problems really well. In particular, it solved the practical problem of proving that patented drugs provide clinical effects for patients much better than the previous paradigm did.
That’s a problem that GS doesn’t solve as well, and there are socioeconomic reasons why a paradigm that solves it well won. On the physics side, “shut up and calculate” also worked well socioeconomically. “Shut up and calculate” works well for problems such as flying airplanes, going to the moon, or building computer chips. To solve those problems, conceptualizing the underlying ontology isn’t necessary. Economically, people did well in those areas by ignoring ontology and simply focusing on epistemology.
GS doesn’t provide better answers to those questions. On the other hand, the prevailing paradigm gives really crappy answers to questions such as “What is autism?”, “What’s a human?”, or “Is a human something different from a Homo sapiens?”. GS is useful for thinking about the answers to those questions, and with big data and AI they are starting to become economically relevant in a way they didn’t use to be.
On the QS Facebook group I had a conversation yesterday about practical problems with the ontology of the term “mood”: a person said they had trouble creating data about moods because they couldn’t find a definition on which 30% of psychologists agree.
I think “general wonderfulness” is the wrong framing. It’s rather that GS does well at a different set of problems.
I think the problem doesn’t make sense in the GS paradigm. Kuhn wrote that problems posed in one paradigm aren’t necessarily expressible in the paradigm of another framework, and I think this is a case like that.
Do you realise that over the course of the discussion, you have
1) offered a solution to the problem of unfounded foundations.
2) offered a claim that a solution exists, but is too long to write down.
3) offered a claim that the problem doesn’t exist in the first place.
1) offered a solution to the problem of unfounded foundations.
The solution offered at the beginning is basically: “Don’t try to let your reasoning be based on underlying foundations in the first place.”
That leaves open the question of how to reason. GS is an answer to that question.
“On the one hand, on the other hand, on the third hand” reasoning, as advocated in Superforecasting, where there doesn’t have to be a shared foundation for all three hands, is another. That’s what Tetlock calls “foxy” thinking, and he argues that it makes better predictions than hedgehog thinking, where everything is based on one model with one foundation.
But Superforecasting provides a bunch of heuristics, not a deep ontological foundation.
I also have other frameworks that point in the same direction but that are even harder to describe and likely not accessible by simply reading a book.
3) offered a claim that the problem doesn’t exist in the first place.
No. The problem exists if you take certain assumptions for granted. I haven’t claimed that you don’t have the problem if you make those assumptions and follow certain heuristics.
This leaves open the question of how to reason differently. GS is an answer to that question; it’s complex, and demonstrating that it’s an internally consistent approach takes time, which is what Science and Sanity does over many pages.
3) offered a claim that the problem doesn’t exist in the first place.
No, I do see that the problem exists if you follow certain heuristics.
The solution offered at the beginning is basically: “Don’t try to let your reasoning be based on underlying foundations in the first place.”
What that seems to amount to is “conduct all your reasoning inside a black box”. That creates some problems, such as the problem of being able to verify your reasoning.
What that seems to amount to is “conduct all your reasoning inside a black box”.
No, it’s not a black box. It’s just not the usual box, and Science and Sanity describes how that box works. And it’s sufficiently complex that it’s not easy to break down in one page.