It’s interesting to me that you identify with S2 / the AI / the rider, and regard S1 / the monkey / the elephant as external. I suspect this is pretty common among rationalists. Personally, I identify with S1 / the monkey / the elephant, and regard S2 / the AI / the rider in exactly the way your metaphor suggests—this sort of parasite growing on top of me that’s useful for some purposes, but can also act in ways I find alien and that I work to protect myself from.
Interesting. Testing a theory: Do you ever hear benevolent voices?
You’d be operating much closer to Julian Jaynes’ bicameral mindset than most of us. According to the theory, it was quite normal for people in many ancient cultures to consult hallucinated voices for guidance, and relatively rare for them to solve right-hemisphere problems without them. The voices of the deceased lingered after death, and most gods proper formed as agglomerations of a people’s memories of their dead kings, experienced after the kings died from a different angle, as supernatural beings.
You may be more predisposed to developing tulpas. If you get yourself a figurine that looks like it’s about to speak, an ili, like the Olmec used to have, and listen closely to it beside a babbling stream, I wonder if you’d begin to hear the voice of your metasystemic, mechanistic problem-solver speaking as if it were not a part of you. I wonder what kinds of things it would say.
No, I don’t think I’m particularly bicameral. I’m talking about something more like identity management: in the same way that I have some ability to choose whether I identify as a mathematician, a music lover, a sports fan, etc. I have some ability to choose whether I identify as my S1 or my S2, and I choose to identify as my S1.
A tulpa is a lot more than a hallucinated voice. People hearing voices in their heads is quite common.
Mm, you’re right, they may be completely separate systems, though I’d expect there to be some correlation.
I wouldn’t necessarily say separate systems but a tulpa is something much more complex than a simple voice. If you get a decent trance state you can get a voice with a simple suggestion.
A tulpa takes a lot more work.
Note that all of the auditory hallucinations Jaynes reports are attributed to recurring characters like Zeus, personal spirits, or Osiris; they’re always more complex than a disembodied voice as well.
I don’t think the average person in our times who reports hearing the voice of Jesus has something as complex as a tulpa (the way it’s described by the tulpa community).
But then how can you use complex language to express your long-term goals, like you’re doing now? Do you get/trick S2 into doing it for you?
I mean, S2 can be used by S1; the clearest example would be someone addicted to heroin using S2 to invent reasons to take another dose. But it must be hard doing anything more long-term, since you’d be giving up too much control.
Or is the concept of long-term goals itself also part of the alien thing you have to use as a tool? Your S2 must really be a good FAI :D
At some point, and maybe that point is now, the S1/S2 distinction becomes too vague to be helpful, and it doesn’t help that people can use the terms in very different ways. Let me say something more specific: a common form of internal disagreement is between something like your urges and something like your explicit verbal goals. Many people have the habit of identifying with the part of them that has explicit verbal goals and not endorsing their urges. I don’t; I identify with the part of myself that has urges, and am often suspicious of my explicit verbal goals (a lot of them are for signaling, probably).