It’s truly strange seeing you say something like “Very high level epistemic rationality is about retraining one’s brain to be able to see patterns in the evidence in the same way that we can see patterns when we observe the world with our eyes.” I already compulsively do the thing you’re talking about training yourself to do! I can’t stop seeing patterns. I don’t claim that the patterns I see are always true, just that it’s really easy for me to see them.
For me, thinking is like a gale wind carrying puzzle pieces that dance in the air and assemble themselves in front of me in gigantic structures, without any intervention by me. I do not experience this as an “ability” that I could “train”, because it doesn’t feel like there is any sort of “me” doing it: I am merely the passive observer. “Training” pattern recognition sounds as strange to me as training vision itself: all I have to do is open my eyes, and it happens. Apparently it isn’t that way for everyone?
The only ways I’ve discovered to train my pattern recognition are to feed myself more information of higher quality (because garbage in, garbage out) and to train my attention. Once I learn to notice something, I start to compulsively see patterns in it. For someone who isn’t already compulsively maxing out their pattern recognition, maybe it’s trainable.
Another example: my brain is often lining people up in rows of 3 or 4 according to some collection of traits. There might be “something” where Alice has more of it than Bob, and Bob has more of it than Carol. I see them standing next to each other, kind of like pieces on a chessboard. Basically, I think what my brain is doing is some kind of factor analysis: it is identifying unnamed dimensions behind people’s personalities and using them to make predictions. I’m pretty sure that not everyone is constantly doing this, but I could be wrong.
Perhaps someone smarter than me might be able to visualize a larger number of people in multiple dimensions in people-space. That would be pretty cool.
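The factor-analysis intuition above can be sketched mechanically. This is a toy illustration only: the trait numbers are entirely invented, and plain PCA via SVD stands in for factor analysis. It finds an unnamed dimension along which Alice, Bob, and Carol vary most, then lines them up along it.

```python
import numpy as np

# Hypothetical trait ratings (rows: people, columns: observed traits).
# All numbers are invented purely for illustration.
people = ["Alice", "Bob", "Carol"]
traits = np.array([
    [0.9, 0.8, 0.2, 0.7],  # Alice
    [0.6, 0.5, 0.4, 0.5],  # Bob
    [0.2, 0.3, 0.8, 0.1],  # Carol
])

# Center the data, then use SVD to find the principal axes --
# the "unnamed dimensions" along which people vary the most.
centered = traits - traits.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)

# Project each person onto the first latent dimension and sort,
# lining them up "in a row" along that factor. The sign of the
# axis is arbitrary, so the row may come out reversed, but Bob
# (intermediate on every trait) lands in the middle either way.
scores = centered @ vt[0]
order = [people[i] for i in np.argsort(scores)[::-1]]
print(order)
```

With real data one would use proper factor analysis (which models per-trait noise) rather than raw PCA, but the "find hidden dimensions, rank people along them" shape is the same.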
On a trivial level, everyone can do pattern-recognition to some degree, merely by virtue of being a human with general intelligence. Yet some people can synthesize larger amounts of information collected over a longer period of time, update their synthesis faster and more frequently, and can draw qualitatively different sorts of connections.
I think that’s what you are getting at when you talk about pattern recognition being important for epistemic rationality. Pattern recognition is like a mental muscle: some people have it stronger, some people have different types of muscles, and it’s probably trainable. There is only one sort of deduction, but perhaps there are many approaches to induction.
Luke’s description of Carl Shulman reminds me of Ben Kovitz’s description of Introverted Thinking as constantly writing and rewriting a book. When you ask Carl Shulman a question on AI, and he starts giving you facts instead of a straight answer, he is revealing part of his book.
“Many weak arguments” is not how this feels from the inside. From the inside, it all feels like one argument. Except the thing you are hearing from Carl Shulman is really only the tip of the iceberg because he cannot talk fast enough. His real answer to your question involves the totality of his knowledge of AI, or perhaps the totality of the contents of his brain.
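One way to see why many weak arguments can feel like a single strong one from the inside: under a naive Bayesian model with independent arguments, weak likelihood ratios multiply into overwhelming posterior odds. A toy sketch, with all numbers invented:

```python
# Toy illustration (numbers invented): how "many weak arguments"
# combine into one strong conclusion under naive Bayes-style
# updating, assuming the arguments are independent.
prior_odds = 1.0          # 1:1 -- no initial opinion
likelihood_ratio = 1.5    # each weak argument only mildly favors the claim
n_arguments = 20

posterior_odds = prior_odds * likelihood_ratio ** n_arguments
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 4))  # well above 0.99
```

Each argument alone is nearly worthless, yet the aggregate is close to certainty; the holder of all twenty experiences the conclusion as one seamless judgment, not as twenty separate steps.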
For another example of taking arguments in totality vs. in isolation, see King On The Mountain, describing an immature form of Extraverted Thinking:
In the King-on-the-Mountain style of conversation, one person (the King) makes a provocative statement, and requires that others refute it or admit to being wrong. The King is the judge of whether any attempted refutation is successful.
A refutation, for the King to rule it valid, must be completely self-contained: the King will ignore anything outside the way he has conceptualized the topic (see below for an extended illustration). If the refutation involves two or more propositions that must be heard together, the King will rule that each proposition individually fails to refute his statement. He won’t address multiple propositions taken together. Once a proposition is rejected, he gives it no further consideration. A refutation must meet the King’s pre-existing criteria and make sense in terms of the King’s pre-existing way of understanding the subject. The King will rule any suggestion that his criteria are not producing insight as an attempt to cheat.
[…] The amount of information that the King considers at one time is very small: one statement. He makes one decision at a time. He then moves on to the next attempted refutation, putting all previous decisions behind him. The broad panorama—of mathematical, spatial, and temporal relationships between many facts—that makes up the pro-evolution argument, which needs to be viewed all at once to be persuasive, cannot get in, unless someone finds a way to package it as a one-step-at-a-time argument (and the King has patience to hear it). Where his opponent was attempting to communicate just one idea, the King heard many separate ideas to be judged one by one.
Some of the failure modes of Introverted Thinking involve seeing imaginary patterns, dealing with corrupted input, or having aesthetic biases (aesthetic bias is when you are biased toward an explanation that looks neat or harmonious). Communication is also hard, because your true arguments would take a book to describe, if they could be put into words at all.