Despite some problems with the dual process model, I think of this as an S1/S2 thing.
It’s relatively easy to get an insight into S2. All it takes is a valid argument that convinces you. It’s much harder to get an insight into S1, because that requires a bunch of beliefs to change such that the insight becomes an obvious facet of the world rather than a linguistically specified claim.
We might also think of this in terms of GOFAI. Tokens in a Lisp program aren’t grounded in reality by default. A program can say bananas are yellow, but that doesn’t really mean anything until all the terms are grounded. So, to extend the analogy, what’s happening when an insight finally clicks is that the words are now grounded in experience and in some way made real, whereas before they were just words that you could understand abstractly but weren’t part of your lived experience. You couldn’t embody the insight yet.
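To make the analogy concrete, here is a minimal sketch of an ungrounded symbolic fact base (in Python rather than Lisp, and purely illustrative; none of the names here come from any real system):

```python
# Purely illustrative sketch: a tiny GOFAI-style fact base in which
# "bananas are yellow" is just a relation between opaque symbols.
facts = {("banana", "color"): "yellow"}

def lookup(entity, attribute):
    """Return the stored value for (entity, attribute), if any."""
    return facts.get((entity, attribute))

print(lookup("banana", "color"))  # -> yellow

# The program can report that bananas are yellow, but nothing in it connects
# the token "yellow" to the experience of seeing yellow, or "banana" to an
# actual banana. The symbols only mean something once they are grounded
# outside the program, which is the analogue of an insight you can state
# but not yet embody.
```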
For what it’s worth, this is a big part of what drew me to Buddhist practice. I had plenty of great ideas and advice, but no great methods for making those things real. I needed some practices, like meditation, that would help me ground the things that were beyond my ability to embody just by reading and thinking about them.
Something like the dual process model applied to me in early 2020. My “rational self” (system 2) judged it likely that the novel coronavirus was no longer containable at that point, and that we would get a catastrophic global pandemic, like the Spanish flu. This was mainly because of a chart I saw on Twitter that compared case number growth for 2003 SARS with case number growth for nCoV. The number of confirmed cases was still very small, but it was increasing exponentially. My gut feeling (system 1), though, was still judging a global pandemic as unlikely. After all, nothing like that had happened in my lifetime, and double digits of new infections per day didn’t yet seem worrying in the grand scheme of things. Exponential growth isn’t intuitive. Moreover, most people, including rationalists on Twitter, were still talking about other stuff. Only some time later did my gut feeling “catch up”, and the realization hit like a hammer. I think it’s important not to forget how early 2020 felt.
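To see how far linear intuition and the actual curve diverge, here is a toy projection; the starting count and doubling time are made-up round numbers for illustration, not the figures from that chart:

```python
# Toy illustration of exponential case growth. The numbers are hypothetical
# round figures chosen for clarity, not the actual early-2020 data.
cases_now = 60        # hypothetical confirmed cases today
doubling_days = 5     # hypothetical doubling time

for week in range(1, 13):
    projected = cases_now * 2 ** (7 * week / doubling_days)
    print(f"week {week:2d}: ~{projected:,.0f} cases")

# Under these assumptions, "double digits of new infections per day" turns
# into roughly a million cumulative cases by week 10, which is why the curve
# looked alarming to system 2 long before it felt alarming to system 1.
```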
Or another example: I currently think (system 2) that a devastating AI catastrophe will occur with some significant probability. But my gut feeling (system 1) still says that everything will surely play out differently from how the doomers expect, and that we will look naive in hindsight, just as a few years ago nobody expected LLMs to produce oracle AI that basically passes the Turing test, until shortly before it happened.
Those are examples of system 1 thinking: the situation still looks fairly normal right now, so it will stay normal.