Thanks for the helpful comment! I’m glad other people have a sense of the thing I’m describing. Some responses:
> I am somewhat skeptical about whether your attempt at conceptually unifying these concerns—i.e., the concept of “rationality realism”—quite works.
I agree that it’s a bit of a messy concept. I do suspect, though, that people who see each of the ideas listed above as “natural” do so because of intuitions that are similar both across ideas and across people. So even if I can’t conceptually unify those intuitions, I can still identify a clustering.
> Regardless of whether Hanson is right about how our minds work (and I suspect he is right to a large degree, if not quite entirely right), the question of who we are seems to be a matter of choosing which aspect(s) of our minds’ functioning to endorse as ego-syntonic. Under this view, it is nonsensical to speak of a scenario where it “turns out” that I “am just my system 1”.
I was a bit lazy in expressing it, but I think that the underlying idea makes sense (and have edited to clarify a little). There are certain properties we consider key to our identities, like consistency and introspective access. If we find out that system 2 has much less of those than we thought, then that should make us shift towards identifying more with our system 1s. Also, the idea of choosing which aspects to endorse presupposes some sort of identification with the part of your mind that makes the choice. But I could imagine finding out that this part of my brain is basically just driven by signalling, and then it wouldn’t even endorse itself. That also seems like a reason to default to identifying more with your system 1.
> Also, what on earth does “break your thought process” even mean?
An analogy: in maths, a single contradiction “breaks the system” because it can propagate into any other proofs and lead to contradictory conclusions everywhere. In humans, it doesn’t, because we’re much more modular and selectively ignore things. So the relevant question is something like “Are much more intelligent systems necessarily also more math-like, in that they can’t function well without being internally consistent?”
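To make the maths half of that analogy concrete, here’s a minimal Lean sketch (the theorem name is just illustrative) of the mechanism I have in mind, the principle of explosion: once a single contradiction is provable, every proposition whatsoever follows.

```lean
-- Principle of explosion (ex falso quodlibet): a proof of any
-- contradiction `P ∧ ¬P` yields a proof of an arbitrary `Q`,
-- so one inconsistency propagates through the entire system.
theorem explosion (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.left h.right
```

Humans escape this only because we don’t mechanically apply every valid inference to everything we believe; the question is whether much more capable reasoners can keep getting away with that.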
> I agree that it’s a bit of a messy concept. I do suspect, though, that people who see each of the ideas listed above as “natural” do so because of intuitions that are similar both across ideas and across people. So even if I can’t conceptually unify those intuitions, I can still identify a clustering.
For the record, and in case I didn’t get this across—I very much agree that identifying this clustering is quite valuable.
As for conceptual unification, we ought, I think, to treat it as a separate and additional challenge (and, indeed, we must be open to the possibility that a straightforward unification is not, after all, appropriate).
> I was a bit lazy in expressing it, but I think that the underlying idea makes sense (and have edited to clarify a little). There are certain properties we consider key to our identities, like consistency and introspective access. If we find out that system 2 has much less of those than we thought, then that should make us shift towards identifying more with our system 1s. Also, the idea of choosing which aspects to endorse presupposes some sort of identification with the part of your mind that makes the choice. But I could imagine finding out that this part of my brain is basically just driven by signalling, and then it wouldn’t even endorse itself. That also seems like a reason to default to identifying more with your system 1.
I don’t want to go too far down this tangent, as it is not really critical to your main point, but I actually don’t agree with the claim that “the idea of choosing which aspects to endorse presupposes some sort of identification with the part of your mind that makes the choice”; that is why I was careful to speak of endorsing aspects of our minds’ functioning, rather than identifying with parts of ourselves. I have spoken, elsewhere, of my skepticism toward the notion of conceptually dividing one’s own mind and then selecting one of the sections to identify with. But this is a complex topic, and deserves dedicated treatment; best to set it aside for now, I think.
> So the relevant question is something like “Are much more intelligent systems necessarily also more math-like, in that they can’t function well without being internally consistent?”
I think that this formulation makes sense.
To me, then, it suggests some obvious follow-up questions, which I touched upon in my earlier reply:
In what sense, exactly, are these purportedly “more intelligent” systems actually “more intelligent”, if they lack the flexibility and robustness that come from being able to hold contradictions in one’s mind? Or is that ability merely a flaw in human mental architecture? Might it, rather, be the case that these “more intelligent” systems are simply better than human-like minds at accomplishing their goals, in virtue of their intolerance for inconsistency? But it is not clear how such a claim survives the observation that humans are often inconsistent about what their goals are; it is not quite clear what it means to better accomplish inconsistent goals by being more consistent…
To put it another way, there seems to be some manner of sleight of hand (perhaps an unconscious one) being performed with the concept of “intelligence”. I can’t quite put my finger on the nature of the trick, but something, clearly, is up.