Sorry, I meant that humans have narratives they tell about themselves in their action within society. Like, you might want to do fairly abstract ML to build self-driving cars, but you’ll often say sentences like “My job is to build self-driving cars” or “My job is to move humanity to electric vehicles” or whatever it is when someone asks you “What’s your job?” or asks broader questions about how to relate to you.
I think I’m still not seeing what you’re saying, though maybe it’s not worth clarifying further. You wrote:
One alternative model is that humans run on narratives, and as such you need to be good at building good narratives for yourself that cut to what you care about and capture key truths, as opposed to narratives that are solely based on what people will reward you for saying about yourself or something else primarily socially mediated rather than mediated by your goals and how reality works.
“[...]and it turns out this just worked to hide narratives from my introspective processes, and I hurt myself by acting according to pretty dumb narratives that I couldn’t introspect on.[...]”
This sounds like your model is something like (at a possibly oversimplified gloss): you have to explain to other people what you’re doing; you’ll act according to what you tell other people you’re doing; therefore it’s desirable to give other people descriptions of your behavior that you’d want to act according to. Is that it?
I’m saying one might have an update like “oh wait, I don’t have to act according to the descriptions of my behavior that I give to other people.” That sounds like what TurnTrout described. So the question is whether that’s a possible thing for a human to be like, and I suspect you’re missing a possibility here. You wrote:
Maybe you will pull the full transhumanist here and break free from the architecture, but I have a lot of prior probability mass on people making many kinds of “ignore the native architecture” mistake.
So I was arguing that humans do lots of successful stuff not based on acting according to what they tell other people they’re doing, like figuring out how to crack a nut by dropping a rock on it, and therefore that one might reasonably hope to live life, or live the part that matters to one (bringing about the world that one wants), not according to narratives.
I like your paraphrase of my model.
I’m saying one might have an update like “oh wait, I don’t have to act according to the descriptions of my behavior that I give to other people.”
Yes, it’s great to realize this possibility and see the wider space of options available to you; it’s very freeing.
At the same time, I think it’s also just false, in many bigger systems of humans, that I don’t have to act according to the descriptions of my behavior that I give to other people. If you’re part of a company, a church, a school, a community club, or a country with laws, many parts of that system will move according to the narratives you tell about yourself, and your options will change, with constraints added and removed. Naively not playing the part people expect you to play will lead to you being viewed as deceptive, untrustworthy, and a risk to be around.
I agree most parts of reality aren’t big piles of humans doing things, and I agree that as your plans increasingly rest on non-narrative parts of reality, they gain great power and don’t involve much of this sort of social cognitive work. But most of my probability mass is currently on the belief that it would be a mistake for someone like TurnTrout to imagine their plans are entirely in one realm and not the other, and that they do not need to carefully process and update the narratives they tell about themselves.
But most of my probability mass is currently on the belief that it would be a mistake for someone like TurnTrout to imagine their plans are entirely in one realm and not the other, and that they do not need to carefully process and update the narratives they tell about themselves.
On priors this seems right, yeah. I’d say that “carefully process and update the narratives they tell about themselves” can, and in some cases should, include a lot more of “okay, so I was doing that stuff because of this narrative; can I extract the motives behind that narrative, filter for the ones that seem actually worthwhile on reflection, and reference my future plans to consequentially fulfilling those motives?”. The answer isn’t always “yes”, but when it is you can move in the direction of being less controlled by your narratives in general.
Regarding trustworthiness, that seems right, but it can be taken as a recommendation to be more transparently not-to-be-relied-upon-in-this-particular-way, rather than to more strongly regulate your behavior.
ETA: But I mean, this perspective says that it’s sensible to view it as a mistake to see your life primarily through narratives, right? Like, the mistake isn’t “oh, I should’ve just dropped all my narratives, there was no good reason I had them in the first place”; the mistake is “oh, there are much more desirable states, and it’s a mistake not to have been trending towards those”.
I agree, it is a mistake to view the narratives as primary, I think. Sort of a figure-ground inversion must come, to be in contact with reality.