Hmm, so we’ve talked before and you know I’m a bit suspicious about the usefulness of this enlightenment vs. heaven distinction. Let me try to steelman it within my model of what’s going on here with the human brain and see what happens.
I think of the human brain as primarily performing the activity of minimizing prediction errors. That’s not literally all it does, in that “prediction error” is a weird way to talk about what happens in feedback loops where the “prediction” is some fixed setpoint not readily subject to update based on learning (e.g. setpoints for things related to survival, like eating enough calories). In this model we’re maximally content when there is literally no prediction error.
There are three main ways to achieve this. One is to have such weak models that there’s nothing left to be surprised by. Basically this is the route of becoming a rock. Another is to have such powerful models that one is never surprised. This is the route of becoming an oracle. The last is to do nothing to your models and instead change the world so you’re never surprised. This is the route of the “child emperor” protected from ever knowing suffering (cf. the story of Siddhartha Gautama as a child).
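To make that concrete, here’s a toy sketch in Python (purely my own illustration, not a model from the predictive processing literature; the names are made up) showing how each of the three routes drives the same error term to zero:

```python
# Toy illustration only: contentment modeled as zero prediction error,
# i.e. no gap between what the system predicts and what it observes.

def prediction_error(prediction, observation):
    """Squared gap between what the model expects and what the world delivers."""
    return (prediction - observation) ** 2

world = 7.0        # what actually happens
prediction = 3.0   # what the model currently expects

# Route 1 ("rock"): weaken the model until it predicts nothing at all,
# so there is no prediction left to be wrong about.
rock_error = 0.0   # no model, no registered error

# Route 2 ("oracle"): strengthen the model until it always matches the world.
oracle_error = prediction_error(world, world)             # 0.0

# Route 3 ("child emperor"): leave the model alone and rearrange the world
# so that it conforms to the existing prediction.
emperor_error = prediction_error(prediction, prediction)  # 0.0

print(rock_error, oracle_error, emperor_error)  # all zero, by different routes
```

The particular functional form doesn’t matter here; any monotone measure of the gap between prediction and observation would make the same point.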
In this framework, the first two approaches would be your enlightenment perspective and the third would be your heaven perspective.
There’s a problem with all of this, though, if we try to choose between these approaches: the choice depends on the notion that there is some important distinction between minimizing prediction error one way rather than another. That is, preferring one approach over another carries with it an assumption that it matters which side of the equation is manipulated to achieve contentment. But such an assumption matters only so long as one is held subject to it; if it can instead be held as object, one sees that one can choose freely among the approaches, or at least choose among them by some other criterion.
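To put the “side of the equation” point in symbols (just shorthand, nothing formal): if contentment amounts to minimizing E = (p − o)², then the rock and oracle routes act on the prediction p while the child-emperor route acts on the observation o, and the assumption in question is that it matters which of p or o gets moved.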
To me this dissolves the idea of making any kind of enlightenment vs. heaven distinction except conditionally; that is, the distinction only makes sense conditioned on a particular ontology that makes certain strong assumptions about what is possible in the world, and if/when those assumptions dissolve, the distinction disappears.
I think of the human brain as primarily performing the activity of minimizing prediction errors. That’s not literally all it does, in that “prediction error” is a weird way to talk about what happens in feedback loops where the “prediction” is some fixed setpoint not readily subject to update based on learning (e.g. setpoints for things related to survival, like eating enough calories). In this model we’re maximally content when there is literally no prediction error.
Even assuming that this is true, why does it need to be the most important level of abstraction to consider?
Certainly there are various mechanisms built on top of predictive processing but there seem to be different mechanisms operating on roughly the same level. Even if there weren’t, you could go some abstraction levels lower and say that the brain is attempting to e.g. maintain a particular balance of chemicals within the skull (whatever combination of chemicals is necessary to keep it alive), or to just follow the laws of physics. Or you could go some levels higher and say that some complicated set of social motivations is what the brain is primarily doing. Etc.
It doesn’t seem obviously wrong to me to say that the brain is primarily performing the activity of minimizing prediction errors, but it also seems not-wrong to me to say that the brain is primarily performing any number of other tasks.
Ah, it’s because I think feedback loops are the relevant base process over which mental activity arises (yes, this is ultimately a kind of panpsychist position that also involves deflating what “consciousness” means). Thus PP is the right abstraction for understanding the kinds of feedback loops brains use.
I think of the human brain as primarily performing the activity of minimizing prediction errors. That’s not literally all it does, in that “prediction error” is a weird way to talk about what happens in feedback loops where the “prediction” is some fixed setpoint not readily subject to update based on learning (e.g. setpoints for things related to survival, like eating enough calories).
I tend to think that there are several of these, some of which relate to deeper emotional needs, which I think is an important distinction.
if we try to choose between these approaches: the choice depends on the notion that there is some important distinction between minimizing prediction error one way rather than another.
I’m stating that in different minds, it sure looks to me like there is indeed a fundamental preference for different ways of minimizing prediction error. I tend to call this “heaven” or “enlightenment” orientation although I think it’s quite correlated with what I’ve heard called “masculine” or “feminine” orientation.
I’m stating that in different minds, it sure looks to me like there is indeed a fundamental preference for different ways of minimizing prediction error. I tend to call this “heaven” or “enlightenment” orientation although I think it’s quite correlated with what I’ve heard called “masculine” or “feminine” orientation.
Sure, we might find preferences, but those preferences must themselves be the result of these same brain processes over which the preferences operate, thus they are not grounded in a way that lets us say a person prefers one over the other as anything more than an initial approach.
Thus, at best, I think we can say this distinction between enlightenment and heaven orientation is something like a starting orientation, but it’s not one I expect to hold up. I guess thinking of them that way I don’t mind them so much, although the way you’ve referred to them reads to me like you are suggesting people have essential dispositions that differ rather than different conditions from which they start.
Sure, we might find preferences, but those preferences must themselves be the result of these same brain processes over which the preferences operate,
Why must they? Surely it’s possible there are parts of the mind that are influenced by other processes outside of the predictive processing components?
It’s pretty clear to me, for instance, that people act differently on psychedelics not because they’re somehow making a prediction about what will happen while on psychedelics, but because the drugs are actually changing the way the brain accesses and makes those predictions. So it’s not hard to imagine other chemicals in people’s brains, operating at different biological set points, fundamentally altering the way their brains would like to update. Not to mention biological brain differences, etc.
It could be starting dispositions as well, ones that can then be changed, but I don’t see a principled reason that that should be the case.
But then if so much flexibility is possible, what is even producing this distinction between enlightenment and heaven approaches?
I guess I should be clear I’m generally unhappy living with concepts that are descriptive and don’t have gears. So while you might see a pattern that looks like this split, I’m not really satisfied by it so long as we don’t understand the mechanism by which this pattern appears, and I’m generally not willing to stake much on patterns that don’t have causal explanations, hence why I’m poking at this.
My guess is that there are attractors in this broad space, similar to other personality differences.