I think the elephant/rider analogy gets the mental imagery backwards, and advocate thinking in terms of monkey (S1) / deliberator (S2) instead. It’s system 1 that provides the motive force and actually does things; system 2 is a procedure implemented by system 1 (if system 1 feels like it).
The discussion of identity feels off to me. The monkey identifies as the monkey (so much as it identifies with anything), the deliberator identifies as the deliberator (so much as it identifies with anything). You seem to be talking about some third thing that gets to choose which of the two it wants to identify as; I’m skeptical. I expect the actual difference between you and other rationalists isn’t mostly about identity. (At a minimum, to make sense of it in my model I need to round it to something other than identification.)
When making decisions, by default the deliberator chooses what it wants and the monkey chooses what it wants, so we have two optimization processes working slightly at cross-purposes. I think this gives rise to hypocrisy (see this post, Robin’s book) that was evolutionarily adaptive, but that today we can get a Pareto improvement by adopting a better compromise between the two (a toy numerical sketch follows this comment).
I don’t think elephant-to-elephant communication is at odds with other people’s riders identifying with their riders. I think people are more skeptical when the claim is that the rider can’t even understand the nature of what is to be communicated, since they have the (I believe correct) view that the rider is a universal understander-of-things in some strong sense.
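A toy numerical sketch of that Pareto point may help (this is my own illustration, not from the comment above; the action names and payoff numbers are invented):

```python
# Toy model of two optimizers at cross-purposes finding a Pareto improvement.
# All action names and utility numbers are invented for illustration.

# Utilities per action: (deliberator/S2 utility, monkey/S1 utility).
utilities = {
    "work_all_evening":   (10, 0),   # S2's favorite, S1's least favorite
    "browse_all_evening": (0, 10),   # S1's favorite, S2's least favorite
    "work_then_relax":    (7, 7),    # a compromise option
}

# Default: the two processes fight and each gets its way about half the
# time, so each expects the average of its best and worst outcomes.
default = tuple(
    (a + b) / 2
    for a, b in zip(utilities["work_all_evening"],
                    utilities["browse_all_evening"])
)

# Compromise: jointly pick the action maximizing the sum of the two utilities.
compromise = max(utilities.values(), key=sum)

print("default (tug-of-war):", default)     # (5.0, 5.0)
print("compromise:          ", compromise)  # (7, 7)
assert all(c > d for c, d in zip(compromise, default))  # both strictly better
```

The point is just that when each optimizer unilaterally pursues its own objective, both can end up worse off than under a negotiated joint policy.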
You seem to be talking about some third thing that gets to choose which of the two it wants to identify as; I’m skeptical.
I’m not talking about a third thing, although I agree that something is off about my framing and I didn’t quite ask the question I meant to ask. In your framing, I’m talking about something like recognizing that the monkey is the one actually doing things, and using the word “I” to refer to the monkey accordingly.
I think people are more skeptical when the claim is that the rider can’t even understand the nature of what is to be communicated, since they have the (I believe correct) view that the rider is a universal understander-of-things in some strong sense.
Can you at least consider the weaker hypothesis that there are things some people know how to communicate elephant-to-elephant but don’t know how to explain to anyone’s riders? (In the same way that for most of human history nobody understood the mechanics of color vision, but everyone could show each other red objects.)
I do think that understanding the mechanics of color vision is not necessary to explain red to the rider. The rider is totally capable of understanding things like “these things are distinguishable by an attribute that you are not directly aware of, but that is a lot like the difference between green and blue objects to you, and similar to how an eagle is able to see much farther than you can, and similar to how a dog can smell things you cannot” and many similar sentences. I do not think the concept of red has become qualitatively easier to describe with the advent of modern neuroscience (though it has definitely gotten quantitatively easier).
Sorry, I wasn’t clear; the analogy I have in mind for color vision is trying to explain red to someone who lives in a black-and-white world and doesn’t have any experience of color at all.
I don’t know how much it matters, but I think you’re generalizing from fictional evidence here, in the following sense: If someone truly had no experience of colour, I do not expect that showing them red objects would likely give them much idea of what the experience of seeing red things is like for people who have lived with colour vision all their lives. (Compare those experiments in which cats were raised with no horizontal lines in their environment and grew up insensitive to horizontal features.)
In all likelihood, when I talk about ‘identification’ I do not actually mean exactly the elephant vs rider concept.
Instead of an S1 vs S2 division, I’m probably actually talking about an S1-S2 vs S1-S2 division (each side has both). But the associations / cultural understandings are clearer when I call it an S1 vs S2 division.
‘S1’ is the thing that keeps scrolling Facebook even as ‘S2’ is having verbal thoughts like ‘maybe I should stop’, but calling that S2 is not entirely accurate. Actually, something in S1 is causing me to have the thought ‘maybe I should stop’, which is also tied to some more subconscious emotion.
Everything I actually do is tied to S1/elephant in some way.
But for some reason, I have divided the elephant into parts I identify with and other parts I don’t.
I cannot tell if I’ve actually addressed your points or not because I’m having trouble with terms. I felt confused by your second paragraph.
Yeah, maybe a better frame for the thing I want to talk about is disidentifying with parts of yourself that want things that are ego-dystonic.
I don’t like it much; it seems to divide things between ‘endorsed’ vs ‘unendorsed’?
It’s a little different. Endorsing is a conscious activity, but finding something ego-dystonic isn’t.
If consciousness (S2) is really just the forum for disputes between S1 submodules vying for access to the motor neurons, then you’re mostly only going to become conscious of S1 activity when those modules are in conflict. You could even say S2 is “awareness and attempted arbitration of conflicts between S1 subminds” and so it makes sense that the floating point of view in your head would identify with whatever motor program it has chosen as the correct one, even while the body refuses to listen.
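A minimal code sketch of that arbitration picture (entirely my own toy model, not the commenter’s; the threshold, submind names, and bid strengths are invented):

```python
# Toy model of "S2 as awareness and attempted arbitration of conflicts
# between S1 subminds": submodules bid for the motor system, and the
# arbitration step only runs when the top bids are close enough to conflict.
# All names and numbers here are invented for illustration.

CONFLICT_MARGIN = 0.2  # hypothetical threshold for "close enough to conflict"

def act(bids: dict[str, float]) -> str:
    """Return the winning motor program given each submind's bid strength."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    top, runner_up = ranked[0], ranked[1]
    if bids[top] - bids[runner_up] >= CONFLICT_MARGIN:
        # No real conflict: the winner takes the motor system silently;
        # nothing rises to conscious awareness.
        return top
    # Conflict: on this picture, this is the only branch where "S2"
    # activity happens at all.
    print(f"conscious of conflict: {top!r} vs {runner_up!r}")
    # The arbitration here is just an endorsement; as the comment notes,
    # the body may still execute whichever program actually out-bids it.
    return top

act({"keep_scrolling": 0.9, "stop_and_work": 0.85})  # conflict -> conscious
act({"drink_water": 0.9, "keep_scrolling": 0.1})     # no conflict -> silent
```

On this sketch, “identifying with the chosen motor program” just means the conscious branch reports a winner, even when the underlying bids, not the report, determine behavior.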