I think Science and Sanity lays out a framework for dealing with beliefs that doesn’t categorize them as true/false, and which is better than the basic true/false dichotomy.
Using a ramified logic with more than two truth values is not the same as not using logic at all!
I care more about what Science and Sanity called semantic reactions than I care about presuppositions.
Basically you feed the relevant data into your mind and then you let it process the data. As a result of processing it there is a semantic reaction. Internally the brain does that with a neural net that doesn’t use logical chains to do its work.
When I write here I point out the most important pieces of the data, but not everything my reasoning is based on, because it’s based on lots of experiences and lots of empirical data.
That is such a vague description of reasoning that it covers everything from superforecasting to schizobabble. You have relieved yourself of the burden of explaining how reasoning works without presuppositions by not treating reasoning as something that necessarily works at all.
Could you define what you mean by “logic”, if not thinking in terms of whether a statement is true?
Thinking about how probable it is, or how much subjective credence it should have. There are formal ways of demonstrating how fuzzy logic and probability theory extend bivalent logic.
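One standard way to see this formally: Zadeh-style fuzzy connectives, when restricted to the classical truth values, collapse to ordinary Boolean logic. A minimal sketch (my own illustration, not from any of the books discussed):

```python
# Toy illustration: fuzzy connectives assign truth values in [0, 1].
# Restricted to the classical values {0.0, 1.0} they reproduce Boolean
# AND/OR/NOT, which is one formal sense in which fuzzy logic "extends"
# bivalent logic rather than abandoning logic.

def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)

def fuzzy_not(a: float) -> float:
    return 1.0 - a

# On classical truth values the connectives agree with Boolean logic...
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        assert fuzzy_and(a, b) == float(bool(a) and bool(b))
        assert fuzzy_or(a, b) == float(bool(a) or bool(b))

# ...but they also handle intermediate credences:
print(fuzzy_and(0.7, 0.4))  # 0.4
```

The same move works for probability theory: probabilities over {0, 1} recover classical truth values as a special case.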
Science and Sanity is not about probability theory or similar concepts of having numbers between 0 and 1.
“The map is not the territory” doesn’t mean “The map is the territory with credence X that’s between 0 and 1”. It’s rather a rejection of the concept of the “is of identity”, thinking instead in terms like semantic reactions.
I was pointing out that the claim that logic is implicit in empiricism survives an attack on bivalence. I couldn’t see any other specific point being made.
Let’s say I want to learn juggling. Simply reading a book that gives me a theory of juggling won’t give me the skill to juggle. What gives me the skill is practicing it and exposing myself, through the practice, to empirical feedback.
I don’t think it’s useful to model that empirical part of learning to juggle with logic.
Juggling with logic is a loose metaphor...literally, juggling is a physical skill, so it cannot be learnt from pure theory. But reasoning is not a physical skill.
If you were able to make implicit reasoning explicit, you would be able to do useful things like seeing how it works, and improving it. I’m not seeing the downside to explicitness. Implicit reasoning is usually more complex than explicit reasoning, and its advantage lies in its complexity, not its implicitness.
Juggling with logic is a loose metaphor...literally, juggling is a physical skill, so it cannot be learnt from pure theory. But reasoning is not a physical skill.
Why do you think the dualistic distinction between physical and mental is useful for skill learning? But if you want a more mental skill, how about dual n-back?
I’m not seeing the downside to explicitness.
The problem is that the amount of information that you can use for implicit reasoning vastly outweighs the amount of information for explicit reasoning. It’s quite often useful to make certain information explicit but you usually can’t make all available information that a brain uses for a reasoning process explicit.
Besides, neither General Semantics nor the Superforecasting principles are against using explicit reasoning. In both cases there are quite explicit heuristics about how to reason.
I started by saying that your idea that all reasoning processes are either explicit or implicit is limiting. In General Semantics you rather say “X is more explicit than Y” instead of “X is explicit”.
Using the binary classifier means that your model doesn’t show certain information about reality that the General Semantics model shows.
“Explicitness is important” isn’t a defense at all because it misses the point. I’m not against using explicit information just as I’m not against using implicit information.
That is such a vague description of reasoning that it covers everything from superforecasting to schizobabble.
If you agree that it covers superforecasting, then my argument is right.
Using presuppositions is a very particular way of reasoning and there are many other possible heuristics that can be used.
A LW comment also isn’t long enough to lay out a complete system of reasoning as complex as the one proposed in Science and Sanity or the one proposed in Superforecasting. That’s why I make general arguments and refer to the books for a more detailed explanation of particular heuristics.
Using presuppositions is a very particular way of reasoning and there are many other possible heuristics that can be used.
There’s basically two kinds of reasoning—the kind that can be made manifest (explicit, etc.) and the kind that can’t. The gold standard of solving the problem of presuppositions (foundations, intuitions) is to show that nothing presupposition-like is needed in explicit reasoning. Failed attempts tend to switch to implicit reasoning, or to take it that sufficiently obvious presuppositions don’t count as presuppositions (we can show this with induction... we can show this with empiricism).
I don’t think that’s the case. Trying to put complex concepts into two binary boxes is done very frequently in the Western tradition, but there’s no inherent argument that it’s the best way to do things.
Science and Sanity argues in detail why binary thinking is limiting.
As far as this particular case of the implicit/explicit distinction, most kinds of reasoning tend to be a mix.
Reasoning that’s completely explicit is the kind of reasoning that can be done by a computer with very limited bandwidth. Many problems can’t be solved by computers as easily as calculating 23472349 * 5435408, which can be done completely explicitly. If you limit yourself to what can be made completely explicit, you limit yourself to a level of intelligence that can’t outperform computers with very limited memory/CPU power.
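The arithmetic example can be spelled out to show what “completely explicit” means in practice. A minimal sketch (my own illustration, not from the discussion):

```python
# Hypothetical sketch: grade-school long multiplication spelled out so
# that every step of the computation is explicit and mechanical. This is
# the kind of reasoning even a very limited computer can carry out.

def long_multiply(x: int, y: int) -> int:
    # Digits of y, least significant first (non-negative y assumed).
    digits = [int(d) for d in str(y)][::-1]
    total = 0
    for place, d in enumerate(digits):
        total += x * d * 10 ** place  # one partial product per digit
    return total

# The fully explicit procedure agrees with the built-in multiplication:
assert long_multiply(23472349, 5435408) == 23472349 * 5435408
print(long_multiply(23472349, 5435408))  # 127581793533392
```

Every intermediate quantity here is inspectable; nothing corresponds to the vast pool of implicit information a brain draws on.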
Explicit reasoning has its disadvantages, but is still hard to do without. In talking about superforecasters, you are taking it that someone has managed to determine who they are, as opposed to ordinary forecasters, raving lunatics, etc. Determining that kind of thing is where explicit reasoning comes in... what’s the alternative? Groups of people intuiting that each other are reliable intuiters?
Explicit reasoning has its disadvantages, but is still hard to do without.
That’s why you mix it with implicit reasoning if you care about the outcome of the reasoning process. Doing everything implicitly is as bad as doing everything explicitly.
I would have thought the problem with doing everything explicitly is that it is not possible.
Our usual way of combining explicit and implicit reasoning is to reason explicitly from premises which we find intuitively appealing, i.e. which we arrive at by implicit reasoning. That isn’t a solution to the problem, that is the problem: everything is founded on presuppositions, and if they are implicit we can’t check how they are arrived at, and we also can’t check how reliable they are without needing to use further presuppositions.
Korzybski seems to be saying we should be using more implicit reasoning. I don’t see how that helps.
Korzybski seems to be saying we should be using more implicit reasoning. I don’t see how that helps.
I don’t think that’s what he’s saying. In the case of “consciousness of abstraction” he even encourages people to be explicit about things that they usually aren’t.
Korzybski takes a long book to explain how he thinks reasoning should be done and coins a bunch of basic concepts on which it should be built that are internally consistent. I don’t think I can give you a full understanding of how the framework works in the space of a few comments.
Most statements we make in General Semantics are about maps, and there’s no presumption that the map is real and is the territory. Indeed, being explicit about the fact that it isn’t is an important part.
How does that address the presumption problem? You could say that no statement made by anybody has any bearing on reality, so the presumptions they are based on don’t matter... but if that kind of sweeping anti-realism were a good solution, it would have been adopted long ago.
I don’t think General Semantics is anti-realism any more than Einstein’s relativity theory is anti-realism because it states that a lot is relative. I think General Semantics hasn’t been adopted because it’s actually hard to learn to switch to thinking in terms of General Semantics.
Academic science in the 20th century worked to compartmentalize knowledge by subject in a quite specific way, and a discipline like General Semantics didn’t fit into that compartmentalization.
It’s similar to how Cybernetics as a field didn’t make it big because it doesn’t fit into the common categorisation.
I am not saying that GS is necessarily anti-realistic, just trying to find some relevance to your comment. I don’t suppose I will ever find out how GS solves the presupposition problem, since you seem to be more interested in saying how great it is in the most general possible terms.
Do you feel confident you personally have the answer in your own mind, or are you just running on the assumption that GS must contain it somewhere, because of its general wonderfulness?
I think the problem doesn’t make sense in the GS paradigm. Kuhn wrote that problems set in one paradigm aren’t necessarily expressible in another paradigm, and I think this is a case like that.
According to Kuhn science needs to have a crisis to stop using the existing paradigm and move to a different one.
In the field of medicine you could say that the paradigm of Evidence-Based Medicine solved certain issues that the prevailing scientific paradigm had at the time Korzybski wrote. Thinking in terms of probabilities and controlled trials solves certain practical problems really well. It especially solved the practical problem of proving that patented drugs provide clinical effects for patients really well and much better than the previous paradigm.
GS doesn’t solve that problem as well. There are socioeconomic reasons why a paradigm that solves that problem well won. On the physics side, “shut up and calculate” also worked well socioeconomically. “Shut up and calculate” works well for problems such as flying airplanes, going to the moon or building computer chips. To solve those problems, conceptualizing the underlying ontology isn’t necessary. Economically, people did well in those areas by ignoring ontology and simply focusing on epistemology.
GS doesn’t provide better answers to those questions. On the other hand, the prevailing paradigm gives really crappy answers to questions such as “What is autism?” What’s a human? Is a human something different from a Homo sapiens? GS is useful for thinking about the answers to those questions. With big data and AI, those questions are starting to become economically relevant in a way they didn’t use to be.
Yesterday on the QS Facebook group I had a conversation about practical problems with the ontology of the term “mood”: a person said they had trouble creating data about moods because they couldn’t find a definition on which 30% of psychologists agree.
I think “general wonderfulness” is the wrong framing. It’s that GS is doing well at different problems.
I think the problem doesn’t make sense in the GS paradigm. Kuhn wrote that problems set in one paradigm aren’t necessarily expressible in another paradigm, and I think this is a case like that.
Do you realise that over the course of the discussion, you have
1) offered a solution to the problem of unfounded foundations.
2) offered a claim that a solution exists, but is too long to write down.
3) offered a claim that the problem doesn’t exist in the first place.
1) offered a solution to the problem of unfounded foundations.
The solution offered at the beginning is basically: “Don’t try to let your reasoning be based on underlying foundations in the first place.”
That leaves the open question about how to reason. GS is an answer to that question.
“On the one hand, on the other hand, on the third hand” reasoning, as advocated in Superforecasting, where there doesn’t have to be a shared foundation for all three hands, is another. That’s what Tetlock calls “foxy” thinking, and he argues that it makes better predictions than hedgehog thinking, where everything is based on one model with one foundation.
But Superforecasting provides a bunch of heuristics and not a deep ontological foundation.
I also have other frameworks that point in the same direction but that are even harder to describe and likely not accessible by simply reading a book.
3) offered a claim that the problem doesn’t exist in the first place.
No. The problem exists if you take certain assumptions for granted. I have claimed that you don’t have the problem if you drop those assumptions and follow certain heuristics instead.
This leaves open the question of how to reason differently. GS is an answer to how to reason differently; it’s complex, and demonstrating that it’s an internally consistent approach takes time and is done in Science and Sanity over many pages.
3) offered a claim that the problem doesn’t exist in the first place.
No, I do see that the problem exists if you follow certain heuristics.
The solution offered at the beginning is basically: “Don’t try to let your reasoning be based on underlying foundations in the first place.”
What that seems to amount to is “conduct all your reasoning inside a black box”. That creates some problems, such as not being able to verify your reasoning.
What that seems to amount to is “conduct all your reasoning inside a black box”.
No, it’s not a black box. It’s just not the usually used box, and Science and Sanity describes how the box works. And that’s sufficiently complex that it’s not easy to break down in one page.