Juggling with logic is a loose metaphor...literally, juggling is a physical skill, so it cannot be learnt from pure theory. But reasoning is not a physical skill.
If you were able to make implicit reasoning explicit, you would be able to do useful things like seeing how it works, and improving it. I'm not seeing the downside to explicitness. Implicit reasoning is usually more complex than explicit reasoning, and its advantage lies in its complexity, not its implicitness.
> Juggling with logic is a loose metaphor...literally, juggling is a physical skill, so it cannot be learnt from pure theory. But reasoning is not a physical skill.
Why do you think the dualistic distinction of physical and mental is useful for skill learning? But if you want a more mental skill, how about dual n-back?
> I'm not seeing the downside to explicitness.
The problem is that the amount of information you can use for implicit reasoning vastly outweighs the amount of information available for explicit reasoning. It's quite often useful to make certain information explicit, but you usually can't make explicit all the information that a brain uses in a reasoning process.
Besides, neither General Semantics nor the Superforecasting principles are against using explicit reasoning. In both cases there are quite explicit heuristics about how to reason.
I started by saying that your idea that all reasoning processes are either explicit or implicit is limiting. In General Semantics you would rather say "X is more explicit than Y" instead of "X is explicit".
Using the binary classifier means that your model fails to capture information about reality that someone using the General Semantics model does capture.
"Explicitness is important" isn't a defense at all, because it misses the point. I'm not against using explicit information, just as I'm not against using implicit information.