Ah sorry, you’re right; the “might” did indeed come later.
I’m saying here that models are really quite separate from narratives, and models don’t dismiss incoming information. Not sure whether you see this point, and whether you agree with it.
Maybe? I do agree that we might use the word “model” for things that don’t necessarily involve narratives or dismissing information; e.g. if I use information gathered from opinion polls to model the results of the upcoming election, then that doesn’t have a particular tendency to dismiss information.
In the context of this discussion, though, I have been talking about “models” in the sense of “the kinds of models that the human brain runs on, which I’m assuming work roughly the way the human brain is described to work under predictive processing (and thus have a tendency to sometimes dismiss information)”. And the things that I’m calling “narratives” form a very significant subset of those.
I think that it makes sense for the OP to say that viewing their life through narratives is a mistake; do you agree with that? The word “ongoing” in your statement seems to imply that one’s behavior must be somewhat governed by stories; is that what you think? If so, why do you think that?
I do think that one’s behavior must be somewhat governed by narratives, since I think of narratives as being models, and you need models to base your behavior on. E.g. the person I quoted originally had “I am a disorganized person” as their narrative; then they switched to an “I am an organized person” narrative, which produced better results due to being more accurate. What they didn’t do was stop having any story about their degree of organization in the first place. (These are narratives in the same sense that something being a blegg or a rube is a narrative: whether something is a blegg or a rube is a mind-produced intuition that we mistakenly take as a reflection of how Something Really Is.)
Even something like “I have a self that survives over time” seems to be a story, and one which humans are pretty strongly hardwired to believe in (on the level of some behaviors, if not explicit beliefs). You can come to see through it more and more via something like advanced meditation, but seeing through it entirely seems like a sufficiently massive undertaking that I’m not sure it’s practically feasible for most people.
Probably the main reason I think this is my experience with a fair amount of meditation and therapy, which has led me to notice an increasing number of things about myself or the world that had seemed like plain facts but were actually stories/models. (Some of the stories are accurate, but they’re still stories.) This seems to make theoretical sense in light of what I know about the human brain and the nature of intelligence in general, and it also matches the experiences of other people who have investigated their experience using these kinds of methods.
In this light, “viewing your life through narratives is a mistake” seems like something of a category error. A mistake is something that you do, and that you could have elected not to do if you’d known better. But if narratives are something that your brain just does by default, it’s not exactly a mistake you’ve made.
That said, one could argue that it’s very valuable to learn to see all the ways in which you really do view your life through narratives, so that you could better question them. And one could say that it’s a mistake not to invest effort in that. I’d be inclined to agree with that form of the claim.
Ok thanks for clarifying. Maybe this thread is quiescable? I’ll respond, but not in a way that adds much; more like just trying to summarize. (I mean, feel free to respond; just to say, I’ve gotten my local question answered re/ your beliefs.) In summary, we have a disagreement about what is possible: whether it’s possible to not be a predictive processor. My experience is that, by detailed effort in various contexts, I can increase my general tendency (general in that it generalizes to contexts I haven’t specifically made the effort for) to not dismiss incoming information, to not require delusion in order to have goals and plans, and to not behave in a way governed by stories.
> if narratives are something that your brain just does by default
Predictive processing may or may not be a good description of low-level brain function, but that doesn’t imply what’s a good idea for us to be, and it doesn’t imply what we have to be, where what we are is the high-level functioning: the mind / consciousness / agency. Low-level predictive processors are presumably Turing complete, and so can be used as substrate for (genuine, updateful, non-action-forcing) models and (genuine, non-delusion-requiring) plans/goals. To the extent we are or can look like that, I do not want to describe us as being relevantly made of predictive processors, in the same way that you can appropriately understand computers as being “at a higher level” than transistors, and it would be unhelpful to say “computers are fundamentally just transistors”. Like, yes, your computer has a bunch of transistors in it, and you have to think about transistors to do some computing tasks and to make modern computers, but that’s not necessary, and more importantly, thinking about transistors is so far from sufficient for understanding computation that it’s nearly irrelevant.
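To make the substrate point concrete, here is a toy sketch (my own deliberately crude illustration, not a claim about actual brains or about any specific predictive processing account): a low-level, error-driven, PP-flavored update rule being used to implement a perfectly ordinary, updateful estimate. Nothing about the low-level mechanism forces the high-level model to dismiss incoming information.

```python
# Toy sketch: an error-driven ("minimize prediction error") update rule
# used as substrate for a genuinely updateful model. All names and
# numbers are arbitrary illustrations, not anyone's actual proposal.
import random

random.seed(0)
true_bias = 0.7   # the world: a coin that lands heads 70% of the time
estimate = 0.5    # the "model": current estimate of P(heads)
lr = 0.05         # learning rate for the error-driven updates

for _ in range(2000):
    observation = 1.0 if random.random() < true_bias else 0.0
    error = observation - estimate  # prediction error
    estimate += lr * error          # low-level PP-style update

print(round(estimate, 2))  # hovers near 0.7: the substrate tracks reality
```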
> one could argue that it’s very valuable to learn to see all the ways in which you really do view your life through narratives, so that you could better question them. And one could say that it’s a mistake not to invest effort in that. I’d be inclined to agree with that form of the claim.
For predictive processors, questioning something is tantamount to somewhat deciding against behaving in some way. So it’s not just a question of questioning narratives within the predictive processing architecture (in the sense of comparing/modifying/refactoring/deleting/adopting narratives); it’s also a question of decoupling questioning predictions from changing plans.
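As a toy illustration of that decoupling (again my own crude sketch, not a model of real cognition): in the “coupled” agent below, a prediction doubles as a plan, so questioning the prediction automatically weakens the behavior; in the “decoupled” agent, confidence in the prediction can drop while the plan survives.

```python
# Toy contrast between prediction-as-plan and prediction-vs-plan.
# Everything here (names, numbers, thresholds) is an arbitrary
# illustration, not a claim about any real architecture.
from dataclasses import dataclass

@dataclass
class Prediction:
    claim: str         # e.g. "I am an organized person"
    confidence: float  # 0..1

def coupled_act(pred: Prediction) -> float:
    # Prediction doubles as plan: the agent acts to make its prediction
    # come true, so "questioning" (lowering confidence) weakens action.
    return pred.confidence

def decoupled_act(pred: Prediction, goal_value: float) -> float:
    # The belief is weighed as evidence, but the decision to act comes
    # from an explicit goal, so doubt need not abandon the plan.
    expected_payoff = pred.confidence * goal_value
    return 1.0 if expected_payoff > 0.5 else 0.0

p = Prediction("tidying my desk will go fine", confidence=0.9)
print(coupled_act(p))                    # 0.9: belief strength is behavior strength
p.confidence = 0.4                       # "questioning" the prediction
print(coupled_act(p))                    # 0.4: behavior weakens automatically
print(decoupled_act(p, goal_value=2.0))  # 1.0: the plan survives the doubt
```

The only point is the structural difference between the two act functions: in the second, updating the belief and changing the plan are separate operations.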