But I think part of what makes it confusing is that the distinction between cached thoughts and cognitive strategies is not as clear-cut as you might think.
Clarifies some vague feelings I had while reading the article. Good work.
You might think of Cached Thoughts (CTs) as organized into a rough hierarchy:
level-0 CTs (object level, e.g. ‘apples are good’)
level-1 CTs, which are about how to work with/create level-0 CTs (‘try to be quantitative’, ‘think of related, more accessible problems’)
level-2 CTs, which are about how to work with/create level-1 CTs (‘try to visualize what the world would be like if you were wrong’)
level-n CTs are about how to work with/create level-(n-1) CTs
The original post was focused on level-2 CTs, of course. Obviously this hierarchy isn’t strict and the boundaries are fuzzy, but I do think it’s meaningful to try to develop level-2 CTs.
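One way to make the hierarchy concrete is as a recursive structure, where a level-n CT operates on thoughts one level below it. A toy sketch (the class, field names, and examples here are my own invention, not anything from the original post):

```python
from dataclasses import dataclass

@dataclass
class CachedThought:
    content: str
    level: int  # 0 = object level; n > 0 = operates on level-(n-1) CTs

# Level-0: an object-level belief
apples = CachedThought("apples are good", level=0)

# Level-1: a strategy for working with level-0 thoughts
quantify = CachedThought("try to be quantitative", level=1)

# Level-2: a strategy for working with level-1 strategies
visualize = CachedThought("visualize the world where your strategy is wrong", level=2)

def operates_on(ct: CachedThought) -> int:
    """Return the level a CT works on: level-n CTs work on level-(n-1) CTs.

    Level-0 CTs 'work on' the world itself, represented here as -1.
    """
    return ct.level - 1
```

Of course, a fixed `level` field is exactly what the fuzzy-boundary point argues against: a thought like "be quantitative" applies equally well at several levels, so in practice the level is relative to what it's being applied to, not intrinsic to the thought.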
Can you clarify what you mean by:
But ideas is the way our culture implements cognitive strategies.
I just meant something like this: the article talks about “cognitive strategies” as something mysterious and unspecified, which the chronophone can somehow read off and transmit. But lacking a chronophone, how can we convey strategies? Well, by expressing them as ideas, e.g. “try to work at several levels of abstraction at the same time”, or “pick a good system of notation”. (At least this is true if you try to improve the thinking patterns of an entire culture—a single individual can probably get better at some things through practice without conceptualizing why.)
As for the CT-level, as you say, the boundaries are fuzzy. It’s hard to find examples which fit squarely at the third level: “Be quantitative” seems to work as well when thinking about problem-solving strategies as when thinking about object-level problems, and “visualize the world if you were wrong” works equally well when doubting souls as when doubting the scientific method.
So perhaps it’s enough to try to improve our thinking, and our thinking about thinking will automatically follow? The article’s final paragraph notes that “to get nonobvious output, you need nonobvious input”. That’s true for the chronophone, but is it true for thinking in general? If so, that’s slightly discouraging. Compare with “civilization advances by extending the number of important operations which we can perform without thinking about them”—I would hope that just by gradually adding more and more ideas to the pool of things we consider obvious, we will get saner and saner, even if the mental operations we do on those ideas are not particularly novel.