Could you define what you mean by “logic”, if not thinking in terms of whether a statement is true?
Thinking about how probable it is, or how much subjective credence it should have. There are formal ways of demonstrating how fuzzy logic and probability theory extend bivalent logic.
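For instance, here is a minimal sketch (using the standard min/max/1−x fuzzy connectives, which are one common convention, not the only one): restricted to the truth values 0 and 1, the fuzzy connectives coincide with the classical bivalent ones, so bivalent logic sits inside fuzzy logic as a special case.

```python
from itertools import product

# Standard (Zadeh) fuzzy connectives over truth values in [0, 1].
def fuzzy_and(a, b): return min(a, b)
def fuzzy_or(a, b): return max(a, b)
def fuzzy_not(a): return 1 - a

# Restricted to the classical truth values {0, 1}, the fuzzy
# connectives agree with the bivalent ones on every input.
for a, b in product([0, 1], repeat=2):
    assert fuzzy_and(a, b) == (1 if a and b else 0)
    assert fuzzy_or(a, b) == (1 if a or b else 0)
    assert fuzzy_not(a) == (1 if not a else 0)

# Between the corners they return graded truth values instead:
print(fuzzy_and(0.7, 0.4))  # 0.4
```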
Science and Sanity is not about probability theory or similar concepts that assign numbers between 0 and 1.
“The map is not the territory” doesn’t mean “The map is the territory with credence X between 0 and 1”. Rather, it’s a rejection of the concept of the “is of identity” in favour of thinking in terms like semantic reactions.
I was pointing out that the claim that logic is implicit in empiricism survives an attack on bivalence. I couldn’t see any other specific point being made.
Let’s say I want to learn juggling. Simply reading a book that gives me a theory of juggling won’t give me the skill to juggle. What gives me the skill is practicing and, through that practice, exposing myself to empirical feedback.
I don’t think it’s useful to model that empirical part of learning to juggle with logic.
Comparing juggling with logic is a loose metaphor... taken literally, juggling is a physical skill, so it cannot be learnt from pure theory. But reasoning is not a physical skill.
If you were able to make implicit reasoning explicit, you would be able to do useful things like seeing how it works and improving it. I’m not seeing the downside to explicitness. Implicit reasoning is usually more complex than explicit reasoning, and its advantage lies in its complexity, not its implicitness.
> Comparing juggling with logic is a loose metaphor... taken literally, juggling is a physical skill, so it cannot be learnt from pure theory. But reasoning is not a physical skill.
Why do you think the dualistic distinction between physical and mental is useful for skill learning? And if you want a more mental skill, how about dual n-back?
> I’m not seeing the downside to explicitness.
The problem is that the amount of information you can use in implicit reasoning vastly outweighs the amount available to explicit reasoning. It’s quite often useful to make certain information explicit, but you usually can’t make explicit all the information a brain uses in a reasoning process.
Besides, neither General Semantics nor the Superforecasting principles are against using explicit reasoning. In both cases there are quite explicit heuristics about how to reason.
I started by saying that your idea that all reasoning processes are either explicit or implicit is limiting. In General Semantics you would rather say “X is more explicit than Y” instead of “X is explicit”.
Using the binary classifier means that your model fails to show certain information about reality that the General Semantics model does show.
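A toy illustration of that point (the scores and the [0, 1] scale here are my own hypothetical way of putting it, not General Semantics notation): a graded model preserves ordering information that a binary classifier throws away.

```python
# Hypothetical explicitness scores in [0, 1] for three reasoning processes.
explicitness = {"written derivation": 0.9, "checklist": 0.6, "gut feeling": 0.1}

# The graded model can express "X is more explicit than Y":
print(explicitness["written derivation"] > explicitness["checklist"])  # True

# A binary classifier collapses the scale at some threshold...
is_explicit = {k: score >= 0.5 for k, score in explicitness.items()}
print(is_explicit)  # {'written derivation': True, 'checklist': True, 'gut feeling': False}

# ...after which "written derivation" and "checklist" become
# indistinguishable, even though one is far more explicit than the other.
```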
“Explicitness is important” isn’t a defense at all, because it misses the point. I’m not against using explicit information, just as I’m not against using implicit information.