Nice points. To start, there are a few subtleties involved.
One issue, which I thought I had discussed but which I apparently ended up deleting in an editing phase, is that while I have been referring to the Buddhist concept of dukkha as “suffering”, there are some issues with that particular translation. I have also been using the term “unsatisfactoriness”, which is better in some respects.
The issue is that when we say “suffering”, it tends to refer to a relatively strong experience: if you felt a tiny bit of discomfort from your left sock being slightly itchy, many people would say that this does not count as suffering, it’s just a bit of discomfort. But dukkha also includes your reaction to that kind of very slight discomfort.
Furthermore, you can even have dukkha that you are not conscious of. Often we think of suffering as a subjective experience, so something that you are by definition conscious of. Can you suffer from something without being conscious of the fact that you are suffering? I can avoid this kind of issue by saying that dukkha is not exactly the same thing as our common-sense definition of suffering, and unlike the common-sense definition, it doesn’t always need to be conscious. Rather, dukkha is something like a training signal that the brain uses to optimize its functioning and to learn to avoid states with a lot of dukkha: like any other signal in the brain, it has the strongest effect when it becomes strong enough to reach conscious awareness, but it has an effect even when it remains unconscious.
One example of unconscious dukkha might be this. Sometimes there is a kind of a background discomfort or pain that you have gotten used to, and you think that you are just fine. But once something happens to make that background discomfort go away, you realize how much better you suddenly feel, and that you were actually not okay before.
My model is something like: craving comes in degrees. A lot of factors go into determining how strong it is. Whenever there is craving, there is also dukkha, but if the craving is very subtle, then the dukkha may also be very subtle. There’s a spectrum of how easy it is to notice, going roughly something like:
Only noticeable in extremely deep states of meditative absorption; has barely any effect on decision-making
Hovering near the threshold of conscious awareness, becoming noticeable if it disappears or when there’s nothing else going on that could distract you
Registers as a slight discomfort, but will be pushed away from consciousness by any distraction
Registers as a moderate discomfort that keeps popping up even as other things are going on
Experienced as suffering, obvious and makes it hard to focus on anything else
Extreme suffering, makes it impossible to think about anything else
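To make the threshold picture concrete, here’s a toy sketch (in Python; the numbers and the two-tier weighting are invented for illustration, not anything from Buddhist psychology or neuroscience) of dukkha as a graded training signal that has some effect even below the awareness threshold, and a much stronger one above it:

```python
# Toy model: dukkha as a graded training signal with an awareness threshold.
# All constants are illustrative assumptions, not measured quantities.

AWARENESS_THRESHOLD = 0.5  # signals at or above this register consciously

def dukkha_effect(signal: float) -> float:
    """Effect of a dukkha signal on learning and decision-making.

    The signal always has *some* effect (unconscious dukkha), but its
    influence is much stronger once it is intense enough to reach awareness.
    """
    if signal >= AWARENESS_THRESHOLD:
        return signal            # consciously registered: full effect
    return signal * 0.2          # below awareness: weak but nonzero effect

for s in [0.05, 0.3, 0.5, 0.9]:
    print(f"signal={s:.2f} conscious={s >= AWARENESS_THRESHOLD} "
          f"effect={dukkha_effect(s):.2f}")
```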
So when you say that suffering seems to be most strongly associated with wanting conflicting things, I agree with that… that is, I agree that that tends to produce the strongest levels of craving (by making two strong cravings compete against each other), and thus the level of dukkha that we would ordinarily call “suffering”.
At the same time, I also think that there are levels of craving/dukkha that are much subtler, and which may be present even in the case of e.g. imagining a delicious food—they just aren’t strong enough to consciously register, or to have any other effect on decision-making; the main influence in those cases is from non-craving-based motivations. (When the craving is that subtle, there’s also a conflict, but rather than being a conflict between two cravings, it’s a conflict between a craving and how reality is—e.g. “I would like to eat that food” vs. “I don’t actually have any of that food right now”.)
perhaps what you’re saying is that I would have to also think “it would make me happy to eat that, so I should do that in order to be happy.”
I think there’s something like this going on, yes. I mentioned in my previous post that
a craving for some outcome X tends to implicitly involve at least two assumptions:
1. achieving X is necessary for being happy or avoiding suffering
2. one cannot achieve X except by having a craving for it
Both of these assumptions are false, but subsystems associated with craving have a built-in bias to selectively sample evidence which supports these assumptions, making them frequently feel compelling. Still, it is possible to give the brain evidence which lets it know that these assumptions are wrong: that it is possible to achieve X without having craving for it, and that one can feel good regardless of achieving X.
One way that I’ve been thinking of this is that a craving is a form of hypothesis, in the predictive processing sense where hypotheses drive behavior by seeking to prove themselves true. For example, your visual system may see someone’s nose and form the hypothesis that “the thing that I’m seeing is a nose, and a nose is part of a person’s face, so I’m seeing someone’s face”. That contains the prediction “faces have eyes next to the nose, so if I look slightly up and to the right I will see an eye, and if I look left from there I will see another eye”; it will then seek to confirm its prediction by making you look at those spots and verify that they do indeed contain eyes.
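If it helps, here’s a minimal toy sketch of that confirmation-seeking dynamic (the “scene”, the locations, and the confidence updates are all made up for illustration): the hypothesis generates predictions and drives “looks” that test them, gaining confidence from each confirmed prediction.

```python
# Toy confirmation-seeking hypothesis, predictive-processing style.
# The scene, locations, and update sizes are invented for illustration.

face_hypothesis = {
    "name": "I'm seeing a face",
    "predictions": [
        ("up and right of the nose", "eye"),
        ("left of the first eye", "eye"),
    ],
}

# What the simulated world actually contains at those locations.
scene = {
    "up and right of the nose": "eye",
    "left of the first eye": "eye",
}

confidence = 0.5
for location, expected in face_hypothesis["predictions"]:
    observed = scene.get(location)  # the hypothesis makes you "look" here
    if observed == expected:
        confidence = min(1.0, confidence + 0.2)  # prediction confirmed
    else:
        confidence = max(0.0, confidence - 0.3)  # prediction error
    print(f"looked {location}: saw {observed!r}, confidence={confidence:.1f}")
```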
This is closely related to two points that you’ve talked about before; that people form unconscious beliefs about what they need in order to be happy, and that the mind tends to generate filters which pick out features of experience that support the schema underlying the filter—sometimes mangling the input quite severely to make it fit the filter. The “I’m seeing a face” hypothesis is a filter that picks out the features—such as eyes—which support it. In terms of the above, once a craving hypothesis for X is triggered, it seeks to maintain the belief that happiness requires getting X, focusing on evidence which supports that belief. (To be clear, I’m not saying that all filters are created by craving; rather, craving is one subtype of such a filter.)
My model is that the brain has something like a “master template for craving hypotheses”. Whenever something triggers positive or negative valence, the brain “tries on” the generic template for craving (“I need to get / avoid this in order to be happy”) adapted to this particular source of valence. How strong a craving is produced depends on how much evidence can be found to support the hypothesis. If you just imagine a delicious food but aren’t particularly hungry, then there isn’t much of a reason to believe that you need it for your happiness, so the craving is pretty weak. If you are stressed out and seriously need to get some work done, then “I need to relax while I’m on my walk” has more evidence in its favor, so it produces a stronger craving.
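As a toy illustration of the template idea (the strength rule and the evidence numbers are my invented stand-ins for the two examples above):

```python
# Toy "master template for craving hypotheses": the same generic template
# ("I need to get/avoid this in order to be happy") is tried on for each
# source of valence; strength scales with supporting evidence.
# The numbers are invented stand-ins for the two examples in the text.

def craving_strength(valence: float, supporting_evidence: float) -> float:
    """Strength of the instantiated craving hypothesis."""
    return abs(valence) * supporting_evidence

# Imagining delicious food while not hungry: little evidence you need it.
print(craving_strength(valence=0.6, supporting_evidence=0.1))  # weak: 0.06

# "I need to relax on my walk" while stressed with work piling up:
# much more evidence in the hypothesis's favor.
print(craving_strength(valence=0.6, supporting_evidence=0.8))  # stronger: 0.48
```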
One description for the effects of extended meditative practice is “you suffer less, but you notice it more”. Based on the descriptions and my own experience, I think this means roughly the following:
By doing meditative practices, you develop better introspective awareness and ability to pay attention to subtle nuances of what’s going on in your mind.
As your ability to do this improves, you become capable of seeing the craving in your mind more clearly.
All craving hypotheses are ultimately false, because they hold that craving is necessary for avoiding dukkha (discomfort), but actually craving is that which generates dukkha in the first place. Each craving hypothesis attributes dukkha to an external source, when it is actually an internally-generated error signal.
When your introspective awareness and equanimity sharpen enough, your mind can grab onto a particular craving without getting completely pulled into it. This allows you to see that the craving is trying to avoid discomfort, and that it is also creating discomfort by doing so.
Seeing both of these at the same time proves the craving hypothesis false, triggering memory reconsolidation and eliminating the craving.
In order to see the craving clearly enough to eliminate it, your introspective awareness had to become sharper and more capable of magnifying subtle signals to the level of conscious awareness. As a result, as you eliminate strong and moderate-strength cravings, the “detection threshold” for when a craving and its associated dukkha is strong enough to become consciously detectable drops. Cravings and discomforts which were previously too subtle to notice, now start appearing in consciousness.
The end result is that you have less dukkha (suffering) overall, but become better at noticing those parts of it that you haven’t eliminated yet.
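A toy simulation of that “suffer less, but notice it more” dynamic (all intensities and thresholds are arbitrary illustrative numbers): eliminating the cravings that are strong enough to be seen clearly, while the detection threshold keeps dropping, reduces total dukkha even as subtler cravings keep surfacing into awareness.

```python
# Toy model of "you suffer less, but you notice it more".
# Craving intensities and thresholds are arbitrary illustrative numbers.

cravings = [0.9, 0.7, 0.5, 0.3, 0.15, 0.05]  # from gross to very subtle
threshold = 0.4   # minimum intensity that currently reaches awareness

def report(cravings, threshold):
    noticed = [c for c in cravings if c >= threshold]
    print(f"total dukkha={sum(cravings):.2f}, noticed: {noticed}")

report(cravings, threshold)
for _ in range(3):
    # Practice: cravings strong enough to be seen clearly get eliminated,
    # and introspection sharpens, lowering the detection threshold.
    cravings = [c for c in cravings if c < threshold]
    threshold *= 0.4
    report(cravings, threshold)
```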
There are some similarities between working with craving, and the kind of work with the moral judgment system that you discussed in your post about it. That is, we have learned rules/beliefs which trigger craving in particular situations, just as we have learned rules/beliefs which trigger moral judgment in some situations. As with moral judgment, craving is a system in the brain that cannot be eliminated entirely, and lots of its specific instances need to be eliminated separately—but there are also interventions deeper in the belief network that propagate more widely, eliminating more cravings.
One particular problem with eliminating craving is that even as you eliminate particular instances of it, new craving keeps being generated, as the underlying beliefs about its usefulness are slow to change even as special cases get repeatedly disproven. The claim from Buddhist psychology, which my experience leads me to consider plausible, is that the beliefs which cause cravings to be learned are entangled with beliefs about the self. Changing the beliefs which form the self-model causes changes to craving—as the conception of “I” changes, that changes the kinds of evidence which are taken to support the hypothesis of “I need X to be happy”. Drastic enough updates to the self-model can cause a significant reduction in the amount of craving that is generated, to the point that one can unlearn it faster than it is generated.
Though I think that I’m trying to clarify that it is not merely valence or sensation being located in the self, but that another level of indirection is required, as in your “walk to relax” example…
So for craving, indirection can certainly make it stronger, but at its most basic it’s held to be a very low-level response to any valence. Physical pain and discomfort are the most obvious example: pain is very immediate and present, but if it comes to be experienced as less self-related, it too becomes less aversive. In an earlier comment, I described an episode in which my sense of self seemed to become temporarily suspended; the result was that strong negative valence (specifically cold shock from an icy shower) was experienced just as strongly and acutely as before, but it lacked the aversive element—I got out of the shower because I was concerned about the health effects of long-term exposure, but could in principle have remained there for longer if I had wanted. I have had other similar experiences since then, but that one was the most dramatic illustration.
In the case of physical pain, the hypothesis seems to be something like “I have to get this sensation of pain out of my consciousness in order to feel good”. If that hypothesis is suspended, one still experiences the sensation of pain, but without the need to get it out of one’s mind.
(This sometimes feels really weird—you have a painful sensation in your mind, and it feels exactly as painful as always, and you keep expecting yourself to flinch away from it right now… except, you just never do. It just feels really painful and the fact that it feels really painful also does not bother you at all, and you just feel totally confused.)
But the moral judgment system can produce craving/compulsion loops around other people’s behavior, without self-reference! You can go around thinking that other people are doing the wrong thing or should be doing something else, and this creates suffering despite there not being any “self” designated in the thought process. (e.g. “Someone is wrong on the internet!” is not a thought that includes a self whose state is to be manipulated, but rather a judgment that the state of the world is wrong and must be fixed.)
So there’s a subtlety in that the moral judgment system is separate from the craving system, but it does generate valence that the craving system also reacts to, so their operation gets kinda intermingled. (At least, that’s my working model—I haven’t seen any Buddhist theory that would explicitly make these distinctions, though honestly that may very well just be because I haven’t read enough of it.)
So something like:
You witness someone being wrong on the internet
The moral judgment system creates an urge to argue with them
Your mind notices this urge and forms the prediction that resisting it would feel unpleasant, and even though giving in to it isn’t necessarily pleasant either, it’s at least less unpleasant than trying to resist the urge
There’s a craving to give in to the urge, consisting of the hypothesis that “I need to give in to this urge and prove the person on the internet wrong, or I will experience greater discomfort than otherwise”
The craving causes you to give in to the urge
This is a nice example of how cravings are often self-fulfilling prophecies. Experiencing a craving is unpleasant; when there is negative valence from resisting an urge, craving is generated which tries to resist that negative valence. The negative valence would not create discomfort by itself, but there is discomfort generated by the combination of “craving + negative valence”. The craving says that “if I don’t give in to the urge, there will be discomfort”… and as soon as you give in to the urge, the craving has gotten you to do what it “wanted” you to do, so it disappears and the discomfort that was associated with it disappears as well. So the craving just “proved” that you had to give in to the urge in order to avoid the discomfort from the negative valence… even though the discomfort was actually produced by the craving itself!
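The self-fulfilling structure can be caricatured in a few lines of toy code (the rule that discomfort arises only from craving reacting to negative valence is the claim above; the numbers are illustrative): discomfort is present only while the craving is active, yet giving in removes the craving and the discomfort at once, so from the inside it looks as if the urge itself was the problem.

```python
# Toy version of the self-fulfilling prophecy. The rule "discomfort arises
# only from craving reacting to negative valence" is the claim in the text;
# the numbers are illustrative.

def discomfort(craving_active: bool, negative_valence: float) -> float:
    # Negative valence alone creates no discomfort in this model.
    return negative_valence if craving_active else 0.0

negative_valence = 0.7  # from resisting the urge

# While resisting with the craving active: discomfort is present.
print("resisting, craving active: ", discomfort(True, negative_valence))

# Giving in: the craving got what it "wanted" and switches off, and the
# discomfort disappears with it -- "proving" that giving in was necessary.
print("gave in, craving satisfied:", discomfort(False, 0.0))

# But dropping the craving while still resisting yields the same relief,
# showing the discomfort came from the craving, not from resisting:
print("resisting, craving dropped:", discomfort(False, negative_valence))
```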
Whereas if you eliminated the craving to avoid this particular discomfort, then the discomfort from resisting the urge would also disappear. Note that this does not automatically mean that you would resist the urge: it just means that you’d have the option to, if you had some reason to do so. But falsifying the beliefs behind the craving is distinct from falsifying the beliefs that triggered the moral judgment system; you might still give in to the urge, if you believed it to be correct and justified. (This is part of my explanation for why it seems that you can reach high levels of enlightenment and see through the experience of the self, and still be a complete jerk towards others.)
This is all very interesting, but I can’t help but notice that this idea of valence doesn’t seem to be paying rent in predictions that are different from what I’d predict without it. And to the extent it does make different predictions, I don’t think they’re accurate, as they predict suffering or unsatisfactoriness where I don’t consciously experience it, and I don’t see what benefit there is to having an invisible dragon in that context.
I mean, sure, you can say there is a conflict between “I want that food” and “I don’t have it”, but this conflict can only arise (in my experience) if there is a different thought behind “I want”, like “I should”. If “I want” but “don’t have”, this state is readily resolved by either a plan to get it, or a momentary sense of loss in letting go of it and moving on to a different food.
In contrast, if “I should” but “don’t have”, then this actually creates suffering, in the form of a mental loop arguing that it should be there, but it isn’t, but it was there, but someone ate it, and they shouldn’t have eaten it, and so on, and so forth, in an unending loop of hard-to-resolve suffering and “unsatisfactoriness”.
In my model, I distinguish between these two kinds of conflict—trivially resolved and virtually irreconcilable—because only one of them is the type that people come to me for help with. ;-) More notably, only one can reasonably be called “suffering”, and it’s also the only one where meditation of some sort might be helpful, since the other will be over before you can start meditating on it. ;-)
If you want to try to reduce this idea further, one way of distinguishing these types of conflict is that “I want” means “I am thinking of myself with this thing in the future”, whereas “I should” means “I am thinking of myself with this thing in the past/present”.
Notice that only one of these thoughts is compatible with the reality of not having the thing in the present. I can not-have food now, and then have-food later. But I can’t not-have food now, and also have-food now, nor can I have-food in the past if I didn’t already. (No time travel allowed!)
Similarly, in clinging to positive things, we are imagining a future negative state, then rejecting it, insisting the positive thing should last forever. It’s not quite as obvious a causality violation as time travel, but it’s close. ;-)
I guess what I’m saying here is that ISTM we experience suffering when our “how things (morally or rightly) ought to be” model conflicts with our “how things actually are” model, by insisting that the past, present, or likely future are “wrong”. This model seems to me to be a lot simpler than all these hypotheses about valence and projections and self-reference and whatnot.
You say that:
You witness someone being wrong on the internet
The moral judgment system creates an urge to argue with them
Your mind notices this urge and forms the prediction that resisting it would feel unpleasant, and even though giving in to it isn’t necessarily pleasant either, it’s at least less unpleasant than trying to resist the urge
There’s a craving to give in to the urge, consisting of the hypothesis that “I need to give in to this urge and prove the person on the internet wrong, or I will experience greater discomfort than otherwise”
The craving causes you to give in to the urge
But this seems like adding unnecessary epicycles. The idea of an “urge” does not require the extra steps of “predicting that resisting the urge would be unpleasant” or “having a craving to give in to the urge”, etc., because that’s what “having an urge” means. The other parts of this sequence are redundant; it suffices to say, “I have an urge to argue with that person”, because the urge itself combines both the itch and the desire to scratch it.
Notably, hypothesizing the other parts doesn’t seem to make sense from an evolutionary POV, as it is reasonable to assume that the ability to have “urges” must logically precede the ability to make predictions about the urges, vs. the urges themselves encoding predictions about the outside world. If we have evolved an urge to do something, it is because evolution already “thinks” it’s probably a good idea to do the thing, and/or a bad idea not to, so another mechanism that merely recapitulates this logic would be kind of redundant.
(Not that redundancy can’t happen! After all, our brain is full of it. But such redundancy as described here isn’t necessary to a logical model of craving or suffering, AFAICT.)
Well, whether or not a model is needlessly complex depends on what it needs to explain. :-)
Back when I started thinking about the nature of suffering, I also had a relatively simple model, basically boiling down to “suffering is about wanting conflicting things”. (Upon re-reading that post from nine years back, I see that I credit you for a part of the model that I outlined there. We’ve been at this for a while. :-)) I still had it until relatively recently. But I found that there were things which it didn’t really explain or predict. For example:
You can decouple valence and aversion, so that painful sensations appear just as painful as before, but do not trigger aversion.
Changes to the sense of self cause changes even to the aversiveness of things that don’t seem to be related to a self-model (e.g. physical pain).
You can learn to concentrate better by training your mind to notice that it keeps predicting that indulging in a distraction is going to eliminate the discomfort from the distracting urges, but that it could just as well just drop the distraction entirely.
There are mental moves that you can make to investigate craving, in such a way which causes the mind to notice that maintaining the craving is actually preventing it from feeling good, and then dropping it.
If you can get your mind into states in which there is little or no craving, then those states will feel intrinsically good without regard to their valence.
Upon investigation, you can notice that many states that you had thought were purely pleasant actually contain a degree of subtle discomfort; releasing the craving in those states then gets you into states that are more pleasant overall.
If you train your mind to have enough sensory precision, you can eventually come to directly observe how the mind carries out the kinds of steps that I described under “Let’s say that there is this kind of a process”: an experience being painted with valence, that valence triggering craving, a new self being fabricated by that craving, and so on.
From your responses, it’s not clear to me how much credibility you lend to these kinds of claims. If you feel that meditation doesn’t actually provide any real insight into how minds work and that I’m just deluded, then I think that that’s certainly a reasonable position to hold. I don’t think that that position is true, mind you, but it seems reasonable that you might be skeptical. After all, most of the research on the topic is low quality, there’s plenty of room for placebo and motivated reasoning effects, introspection is famously unreliable, et cetera.
But ISTM that if you are willing to at least grant that I and others who are saying these kinds of things are not outright lying about our subjective experience… then you need to at least explain why it seems to us that the urge and the aversion from resisting the urge can become decoupled, or why reductions in the sense of self seem to systematically lead to reductions in the aversiveness of negative valence.
I agree that if I were just developing a model of human motivation and suffering from first principles and from what seems to make evolutionary sense, I wouldn’t arrive at this kind of an explanation. “An urge directly combines an itch and the desire to scratch it” would certainly be a much more parsimonious model… but it would then predict that you can’t have an urge without a corresponding need to engage in it, and that prediction is contradicted both by my experience and the experience of many others who engage in these kinds of practices.
No, that’s a good point, as far as it goes. There does seem to be some sort of meta-process that you can use to decouple from craving regarding these things, though in my experience it seems to require continuous attention, like an actively inhibitory process. In contrast, the model description you gave made it sound like craving was an active process that one could simply refrain from, and I don’t think that’s predictively accurate.
Your points regarding what’s possible with meditation also make some sense… it’s just that I have trouble reconciling the obvious evolutionary model with “WTF is meditation doing?” in a way that doesn’t produce things that shouldn’t be there.
Consciously, I know it’s possible to become willing to experience things that you previously were unwilling to experience, and that this can eliminate aversion. I model this largely under the second major motivational mechanic, that of risk/reward, effort/payoff.
That is, that system can decide that some negative thing is “worth it” and drop conflict about it. And meditation could theoretically reset the threshold for that, since to some extent meditation is just sitting there, despite the lack of payoff and the considerable payoffs offered to respond to current urges. If this recalibrates the payoff system, it would make sense within my own model, and resolve the part where I don’t see how what you describe could be a truly conscious process, in the way that you made it sound.
IOW, I might more say that part of our motivational system is a module for determining which urges should be acted upon and which are not worth it, or perhaps one that translates mind/body/external states into urges or the lack thereof, and that you can retrain this system to have different baselines for what constitutes “urge”-ency. ;-) (And thus, a non-conscious version of “valence” in your model.)
That doesn’t quite work either, because ISTM that meditation changes the threshold for all urges, not just the specific ones trained. Also, the part about identification isn’t covered here either. It might be yet another system being trained, perhaps the elusive “executive function” system?
On the other hand, I find that the Investor (my name for the risk/reward, effort/payoff module) is easily tricked into dropping urges for reasons other than self-identification. For example, the Investor can be tricked into letting you get out of a warm bed into a cold night if you imagine you have already done so. By imagining that you are already cold, there is nothing to be gained by refraining from getting up, and this shifts the “valence”, as you call it, in favor of getting up, because the Investor fundamentally works on comparing projections against an “expected status quo”. So if you convince it that some other status quo is “expected”, it can be made to go along with almost anything.
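To caricature the Investor’s comparison rule in code (the loss-aversion veto and all the numbers are just illustrative stand-ins for my description above): the module vetoes actions that look like a loss relative to the imagined status quo, so re-imagining the status quo flips the verdict.

```python
# Toy caricature of the Investor's comparison rule. The loss-aversion
# veto and all numbers are illustrative stand-ins, not a real model.

def investor_approves(projected_outcome: float,
                      expected_status_quo: float) -> bool:
    """Approve an action only if it doesn't look like a loss relative
    to the imagined status quo."""
    return projected_outcome >= expected_status_quo

getting_up = 0.2  # projected comfort of being up in the cold

# Baseline "I am warm in bed" (0.8): getting up registers as a loss.
print(investor_approves(getting_up, expected_status_quo=0.8))  # False: stay

# Trick: vividly imagine you are *already* cold (baseline 0.2).
# Getting up no longer registers as a loss, so the veto lifts.
print(investor_approves(getting_up, expected_status_quo=0.2))  # True: get up
```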
And so I suppose if you imagine that it is not you who is the one who is going to be cold, then that might work just as well. Or perhaps making it “not me” somehow convinces the Investor that the changes in state are not salient to its evaluations?
Hm. Now that my attention has been drawn to this, it’s like an itch I need to scratch. :) I am wondering now, “Wait, why is the Investor so easily tricked?” And for that matter, given that it is so easily tricked, could the feats attributed to long-term meditation be accomplished in general using such tricks? Can I imagine my way to no-self and get the benefits without meditating, even if only temporarily?
Also, I wonder if I have been overlooking the possibility of using Investor mind-tricks to deal with task-switching inertia, which is very similar to having to get out of a warm bed. What if I imagine I have already changed tasks? Hm. Also, if I am imagining no-self, will starting unpleasant tasks be less aversive?
Okay, I’m off to experiment now. This is exciting!
I am very much impressed by the exchange in the parent-comments and cannot upvote sufficiently.
With regard to the ‘mental motion’:
In contrast, the model description you gave made it sound like craving was an active process that one could simply refrain from [...]
As I see it, viewing this as (sometimes) an active process makes sense from a global workspace theory perspective: there is a part of one’s mind that actually decides whether or not to activate craving. Especially if trained through meditation, it is possible to connect this part to the global workspace and thus to consciousness, which allows noticing and influencing the decision. If this connection is strong enough and can be activated consciously, it can make sense to call this process a mental motion.
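A toy sketch of this gating idea (the workspace-strength variable and the 0.5 cutoff are invented for illustration): the craving-activation decision always happens, but it can only be consciously declined when it is broadcast to the workspace strongly enough.

```python
# Toy global-workspace gate on craving activation. The workspace_strength
# variable and the 0.5 cutoff are invented for illustration.

def craving_activates(trigger: bool, workspace_strength: float,
                      consciously_decline: bool) -> bool:
    if not trigger:
        return False
    broadcast = workspace_strength > 0.5  # does the decision reach awareness?
    if broadcast and consciously_decline:
        return False  # the "mental motion": the decision is declined
    return True       # default: the craving fires automatically

# Untrained: the decision never surfaces, so declining isn't available.
print(craving_activates(True, workspace_strength=0.1, consciously_decline=True))

# Trained: the decision is broadcast to the workspace and can be declined.
print(craving_activates(True, workspace_strength=0.9, consciously_decline=True))
```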
There does seem to be some sort of meta-process that you can use to decouple from craving regarding these things, though in my experience it seems to require continuous attention, like an actively inhibitory process. In contrast, the model description you gave made it sound like craving was an active process that one could simply refrain from, and I don’t think that’s predictively accurate.
An analogy that I might use is that learning to let go of craving is kind of the opposite of the thing where you practice an effortful skill until it becomes automatic. Craving usually triggers automatically and outside your conscious control, but you can gradually increase your odds of being able to notice it, catch it, and do something about it.
“An actively inhibitory process” sounds accurate for some of the mental motions involved. Though just bringing more conscious attention to the process also seems to affect it, and in some cases interrupt it, even if you don’t actively inhibit it.
If this recalibrates the payoff system, it would make sense within my own model, and resolve the part where I don’t see how what you describe could be a truly conscious process, in the way that you made it sound.
Not sure how I made it sound :-) but a good description might be “semi-conscious”, in the same sense that something like Focusing can be: you do it, something conscious comes up, and then a change might happen. Sometimes enough becomes consciously accessible that you can clearly see what it was about, sometimes you just get a weird sensation and know that something has shifted, without knowing exactly what.
Okay, I’m off to experiment now. This is exciting!
Eh. Sorta? I’ve been busy with clients the last few days, not a lot of time for experimenting. I have occasionally found myself, or rather, found not-myself, several times, almost entirely accidentally or incidentally. A little like a perspective shift changing between two possible interpretations of an image; or more literally, like a shift between first-person, and third-person-over-the-shoulder in a video game.
In the third person perspective, I can observe limbs moving, feel the keys under my fingers as they type, and yet I am not the one who’s doing it. (Which, I suppose, I never really was anyway.)
TBH, I’m not sure if it’s that I haven’t found any unpleasant experiences to try this on, or if it’s more that because I’ve been spontaneously shifting to this state, I haven’t found anything to be an unpleasant experience. :-)
Cool, that sounds like a mild no-self state alright. :) Any strong valence is likely to trigger a self-schema and pull you out of it, but it’s a question of practice.
Your description kinda reminds me of the approach in Loch Kelly’s The Way of Effortless Mindfulness; it has various brief practices that may induce states like the one that you describe. E.g. in this one, you imagine the kind of relaxing state in which there is no problem to solve and the sense of self just falls away. (Directly imagining a no-self state is hard, because checking whether you are in a no-self state yet activates the self-schema. But if you instead imagine an external state which is likely to put you in a no-self state, you don’t get that kind of self-reference, no pun intended.)
First, read this mindful glimpse below. Next, choose a memory of a time you felt a sense of freedom, connection, and well-being. Then do this mindful glimpse using your memory as a door to discover the effortless mindfulness that is already here now.
1. Close your eyes. Picture a time when you felt well-being while doing something active like hiking in nature. In your mind, see and feel every detail of that day. Hear the sounds, smell the smells, and feel the air on your skin; notice the enjoyment of being with your companions or by yourself; recall the feeling of walking those last few yards toward your destination.
2. Visualize and feel yourself as you have reached your goal and are looking out over the wide-open vista. Feel that openness, connection to nature, sense of peace and well-being. Having reached your goal, feel what it’s like when there’s no more striving and nothing to do. See that wide-open sky with no agenda to think about, and then simply stop. Feel this deep sense of relief and peace.
3. Now, begin to let go of the visualization, the past, and all associated memories slowly and completely. Remain connected to the joy of being that is here within you.
4. As you open your eyes, feel how the well-being that was experienced then is also here now. It does not require you to go to any particular place in the past or the future once it’s discovered within and all around.
Recently I’ve also gotten interested in the Alexander Technique, which seems to have a pretty straightforward series of steps for expanding your awareness and then getting your mind to just automatically do things in a way which feels like non-doing. It also seems to induce the kinds of states that you describe, of just watching oneself work, which I had previously only gotten from meditation.
Can you pick up a ball without trying to pick up the ball? It sounds contradictory, but it turns out that there is a specific behaviour we do when we are “trying”, and this behaviour is unnecessary to pick up the ball.
How is this possible? Well, consider when you’ve picked up something to fiddle with without realising. You didn’t consciously intend for it to end up in your hand, but there it is. There was an effortlessness to it. [...]
But this kind of non-‘deliberate’ effortless action needn’t be automatic and unchosen, like a nervous fiddling habit; nor need it require redirected attention / collapsed awareness, like not noticing you picked up the object. You can be fully aware of what you’re doing, and ‘watch’ yourself doing it, while choosing to do it, and yet still have there be this effortless “it just happened” quality. [...]
Suppose you do actually want to pick up that ball over there. But you don’t want to ‘do’ picking-up-the-ball. The solution is to set an intention.
[1] Have the intention to pick up the ball. [2] Expand your awareness to include what’s all around you, the room, the route to the ball, and your body inside the room. [3] Notice any reactions of trying to do picking-up-the-ball (like “I am going to march over there and pick up that ball”, or “I am going to get ready to stand up so I can go pick up that ball”, or “I am going to approach the ball to pick it up”) — and decline those reactions. [4] Wait. Patiently hold the intention to pick up the ball. Don’t stop yourself from moving — stopping yourself is another kind of ‘doing’ — yet don’t try to deliberately/consciously move. [5] Let movement happen.
Notably, hypothesizing the other parts doesn’t seem to make sense from an evolutionary POV, as it is reasonable to assume that the ability to have “urges” must logically precede the ability to make predictions about the urges, vs. the urges themselves encoding predictions about the outside world. If we have evolved an urge to do something, it is because evolution already “thinks” it’s probably a good idea to do the thing, and/or a bad idea not to, so another mechanism that merely recapitulates this logic would be kind of redundant.
A hypothesis that I’ve been considering is whether the shift to becoming more social might have caused a second layer of motivation to evolve. Less social animals can act purely based on physical considerations like the need to eat or avoid a predator, but for humans every action has potential social implications, so it needs to also be evaluated in that light. There are some interesting anecdotes, like Helen Keller’s account, suggesting that she only developed a self after learning language. The description of her old state of being sounds like there was just the urge, which was then immediately acted upon; and that this mode of operation then became irreversibly altered:
Before my teacher came to me, I did not know that I am. [...] I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. [...] I never viewed anything beforehand or chose it. [...] My inner life, then, was a blank without past, present, or future, without hope or anticipation, without wonder or joy or faith. [...]
I remember, also through touch, that I had a power of association. I felt tactual jars like the stamp of a foot, the opening of a window or its closing, the slam of a door. After repeatedly smelling rain and feeling the discomfort of wetness, I acted like those about me: I ran to shut the window. But that was not thought in any sense. It was the same kind of association that makes animals take shelter from the rain. From the same instinct of aping others, I folded the clothes that came from the laundry, and put mine away, fed the turkeys, sewed bead-eyes on my doll’s face, and did many other things of which I have the tactual remembrance. When I wanted anything I liked,—ice-cream, for instance, of which I was very fond,—I had a delicious taste on my tongue (which, by the way, I never have now), and in my hand I felt the turning of the freezer. I made the sign, and my mother knew I wanted ice-cream. I “thought” and desired in my fingers. [...]
I thought only of objects, and only objects I wanted. It was the turning of the freezer on a larger scale. When I learned the meaning of “I” and “me” and found that I was something, I began to think. Then consciousness first existed for me. Thus it was not the sense of touch that brought me knowledge. It was the awakening of my soul that first rendered my senses their value, their cognizance of objects, names, qualities, and properties. Thought made me conscious of love, joy, and all the emotions. I was eager to know, then to understand, afterward to reflect on what I knew and understood, and the blind impetus, which had before driven me hither and thither at the dictates of my sensations, vanished forever.
This would also make sense in light of the observation that the sense of self may disappear when doing purely physical activities (you fall back on the original set of systems, which doesn’t need to think about the self), the PRISM model of consciousness as a conflict-solver, the way that physical and social reasoning seem to be pretty distinct, and a kind of semi-modular architecture (you have the old, primarily physical system, and the new one that integrates social considerations is just added on top of the old system’s suggestions). If you squint, the stuff about simulacra also feels kinda relevant, as an entirely new set of implications that diverge from physical reality and need to be thought about on their own terms.
I wouldn’t be very surprised if this hypothesis turned out to be false, but at least there’s suggestive evidence.
Nice points. To start, there are a few subtleties involved.
One issue, which I thought I had discussed but which I apparently ended up deleting in an editing phase, is that while I have been referring to the Buddhist concept of dukkha as “suffering”, there are some issues with that particular translation. I have also been using the term “unsatisfactoriness”, which is better in some respects.
The issue is that when we say “suffering”, it tends to refer to a relatively strong experience: if you felt a tiny bit of discomfort from your left sock being slightly itchy, many people would say that this does not count as suffering, it’s just a bit of discomfort. But dukkha also includes your reaction to that kind of very slight discomfort.
Furthermore, you can even have dukkha that you are not conscious of. Often we think that suffering is a subjective experience, so something that you are conscious of by definition. Can you suffer of something without being conscious of the fact that you are suffering? I can avoid this kind of an issue by saying that dukkha is not exactly the same thing as our common-sense definition of suffering, and unlike the common-sense definition, it doesn’t always need to be conscious. Rather, dukkha is something like a training signal that is used by the brain to optimize its functioning and to learn to avoid states with a lot of dukkha: like any other signal in the brain, it has the strongest effect when the signal becomes strong enough to make it to conscious awareness, but it has an effect even if just unconscious.
One example of unconscious dukkha might be this. Sometimes there is a kind of a background discomfort or pain that you have gotten used to, and you think that you are just fine. But once something happens to make that background discomfort go away, you realize how much better you suddenly feel, and that you were actually not okay before.
My model is something like: craving comes in degrees. A lot of factors go into determining how strong it is. Whenever there is craving, there is also dukkha, but if the craving is very subtle, then the dukkha may also be very subtle. There’s a spectrum of how easy it is to notice, going roughly something like:
Only noticeable in extremely deep states of meditative absorption; has barely any effect on decision-making
Hovering near the threshold of conscious awareness, becoming noticeable if it disappears or when there’s nothing else going on that could distract you
Registers as a slight discomfort, but will be pushed away from consciousness by any distraction
Registers as a moderate discomfort that keeps popping up even as other things are going on
Experienced as suffering, obvious and makes it hard to focus on anything else
Extreme suffering, makes it impossible to think about anything else
So when you say that suffering seems to be most strongly associated with wanting conflicting things, I agree with that… that is, I agree that that tends to produce the strongest levels of craving (by making two strong cravings compete against each other), and thus the level of dukkha that we would ordinarily call “suffering”.
At the same time, I also think that there are levels of craving/dukkha that are much subtler, and which may be present even in the case of e.g. imagining a delicious food—they just aren’t strong enough to consciously register, or to have any other effect on decision-making; the main influence in those cases is from non-craving-based motivations. (When the craving is that subtle, there’s also a conflict, but rather than being a conflict between two cravings, it’s a conflict between a craving and how reality is—e.g. “I would like to eat that food” vs. “I don’t actually have any of that food right now”.)
I think there’s something like this going on, yes. I mentioned in my previous post that
One way that I’ve been thinking of this, is that a craving is a form of a hypothesis, in the predictive processing sense where hypotheses drive behavior by seeking to prove themselves true. For example, your visual system may see someone’s nose and form the hypothesis that “the thing that I’m seeing is a nose, and a nose is part of a person’s face, so I’m seeing someone’s face”. That contains the prediction “faces have eyes next to the nose, so if I look slightly up and to the right I will see an eye, and if I look left from there I will see another eye”; it will then seek to confirm its prediction by making you look at those spots and verify that they do indeed contain eyes.
This is closely related to two points that you’ve talked about before; that people form unconscious beliefs about what they need in order to be happy, and that the mind tends to generate filters which pick out features of experience that support the schema underlying the filter—sometimes mangling the input quite severely to make it fit the filter. The “I’m seeing a face” hypothesis is a filter that picks out the features—such as eyes—which support it. In terms of the above, once a craving hypothesis for X is triggered, it seeks to maintain the belief that happiness requires getting X, focusing on evidence which supports that belief. (To be clear, I’m not saying that all filters are created by craving; rather, craving is one subtype of such a filter.)
My model is that the brain has something like a “master template for craving hypotheses”. Whenever something triggers positive or negative valence, the brain “tries on” the generic template for craving (“I need to get / avoid this in order to be happy”) adapted to this particular source of valence. How strong of a craving is produced, depends on how much evidence can be found to support the hypothesis. If you just imagine a delicious food but aren’t particularly hungry, then there isn’t much of a reason to believe that you need it for your happiness, so the craving is pretty weak. If you are stressed out and seriously need to get some work done, then “I need to relax while I’m on my walk” has more evidence in its favor, so it produces a stronger craving.
One description for the effects of extended meditative practice is “you suffer less, but you notice it more”. Based on the descriptions and my own experience, I think this means roughly the following:
By doing meditative practices, you develop better introspective awareness and ability to pay attention to subtle nuances of what’s going on in your mind.
As your ability to do this improves, you become capable of seeing the craving in your mind more clearly.
All craving hypotheses are ultimately false, because they hold that craving is necessary for avoiding dukkha (discomfort), but actually craving is that which generates dukkha in the first place. Each craving hypothesis attributes dukkha to an external source, when it is actually an internally-generated error signal.
When your introspective awareness and equanimity sharpen enough, your mind can grab onto a particular craving without getting completely pulled into it. This allows you to see that the craving is trying to avoid discomfort, and that it is also creating discomfort by doing so.
Seeing both of these at the same time proves the craving hypothesis false, triggering memory reconsolidation and eliminating the craving.
In order to see the craving clearly enough to eliminate it, your introspective awareness had to become sharper and more capable of magnifying subtle signals to the level of conscious awareness. As a result, as you eliminate strong and moderate-strength cravings, the “detection threshold” for when a craving and its associated dukkha is strong enough to become consciously detectable drops. Cravings and discomforts which were previously too subtle to notice, now start appearing in consciousness.
The end result is that you have less dukkha (suffering) overall, but become better at noticing those parts of it that you haven’t eliminated yet.
There are some similarities between working with craving, and the kind of work with the moral judgment system that you discussed in your post about it. That is, we have learned rules/beliefs which trigger craving in particular situations, just as we have learned rules/beliefs which trigger moral judgment in some situations. As with moral judgment, craving is a system in the brain that cannot be eliminated entirely, and lots of its specific instances need to be eliminated separately—but there are also interventions deeper in the belief network that propagate more widely, eliminating more cravings.
One particular problem with eliminating craving is that even as you eliminate particular instances of it, new craving keeps being generated, as the underlying beliefs about its usefulness are slow to change even as special cases get repeatedly disproven. The claim from Buddhist psychology, which my experience causes me to consider plausible, is that the beliefs which cause cravings to be learned are entangled with beliefs about the self. Changing the beliefs which form the self-model cause changes to craving—as the conception of “I” changes, that changes the kinds of evidence which are taken to support the hypothesis of “I need X to be happy”. Drastic enough updates to the self-model can cause a significant reduction in the amount of craving that is generated, to the point that one can unlearn it faster than it is generated.
So for craving, indirection can certainly make it stronger, but at its most basic it’s held to be a very low-level response to any valence. Physical pain and discomfort is the most obvious example: pain is very immediate and present, but if becomes experienced as less self-related, it too becomes less aversive. In an earlier comment, I described an episode in which my sense of self seemed to become temporarily suspended; the result was that strong negative valence (specifically cold shock from an icy shower) was experienced just as strongly and acutely as before, but it lacked the aversive element—I got out of the shower because I was concerned about the health effects of long-term exposure, but could in principle have remained there for longer if I had wanted. I have had other similar experiences since then, but that one was the most dramatic illustration.
In the case of physical pain, the hypothesis seems to be something like “I have to get this sensation of pain out of my consciousness in order to feel good”. If that hypothesis is suspended, one still experiences the sensation of pain, but without the need to get it out of their mind.
(This sometimes feels really weird—you have a painful sensation in your mind, and it feels exactly as painful as always, and you keep expecting yourself to flinch away from it right now… except, you just never do. It just feels really painful and the fact that it feels really painful also does not bother you at all, and you just feel totally confused.)
So there’s a subtlety in that the moral judgment system is separate from the craving system, but it does generate valence that the craving system also reacts to, so their operation gets kinda intermingled. (At least, that’s my working model—I haven’t seen any Buddhist theory that would explicitly make these distinctions, though honestly that may very well just be because I haven’t read enough of it.)
So something like:
You witness someone being wrong on the internet
The moral judgment system creates an urge to argue with them
Your mind notices this urge and forms the prediction that resisting it would feel unpleasant, and even though giving into it isn’t necessarily pleasant either, it’s at least less unpleasant than trying to resist the urge
There’s a craving to give in to the urge, consisting of the hypothesis that “I need to give in to this urge and prove the person on the internet wrong, or I will experience greater discomfort than otherwise”
The craving causes you to give in to the urge
This is a nice example of how cravings are often self-fulfilling prophecies. Experiencing a craving is unpleasant; when there is negative valence from resisting an urge, craving is generated which tries to resist that negative valence. The negative valence would not create discomfort by itself, but there is discomfort generated by the combination of “craving + negative valence”. The craving says that “if I don’t give in to the urge, there will be discomfort”… and as soon as you give in to the urge, the craving has gotten you to do what it “wanted” you to do, so it disappears and the discomfort that was associated with it disappears as well. So the craving just “proved” that you had to give in to the urge in order to avoid the discomfort from the negative valence… even though the discomfort was actually produced by the craving itself!
Whereas if you eliminated the craving to avoid this particular discomfort, then the discomfort from resisting the urge would also disappear. Note that this does not automatically mean that you would resist the urge: it just means that you’d have the option to, if you had some reason to do so. But falsifying the beliefs behind the craving is distinct from falsifying the beliefs that triggered the moral judgment system; you might still give in to the urge, if you believed it to be correct and justified. (This is part of my explanation for why it seems that you can reach high levels of enlightenment and see through the experience of the self, and still be a complete jerk towards others.)
This is all very interesting, but I can’t help but notice that this idea of valence doesn’t seem to be paying rent in predictions that are different from what I’d predict without it. And to the extent it does make different predictions, I don’t think they’re accurate, as they predict suffering or unsatisfactoriness where I don’t consciously experience it, and I don’t see what benefit there is to having an invisible dragon in that context.
I mean, sure, you can say there is a conflict between “I want that food” and “I don’t have it”, but this conflict can only arise (in my experience) if there is a different thought behind “I want”, like “I should”. If “I want” but “don’t have”, this state is readily resolved by either a plan to get it, or a momentary sense of loss in letting go of it and moving on to a different food.
In contrast, if “I should” but “don’t have”, then this actually creates suffering, in the form of a mental loop arguing that it should be there, but it isn’t, but it was there, but someone ate it, and they shouldn’t have eaten it, and so on, and so forth, in an undending loop of hard-to-resolve suffering and “unsatisfactoriness”.
In my model, I distinguish between these two kinds of conflict—trivially resolved and virtually irreconcilable—because only one of them is the type that people come to me for help with. ;-) More notably, only one can reasonably be called “suffering”, and it’s also the only one where meditation of some sort might be helpful, since the other will be over before you can start meditating on it. ;-)
If you want to try to reduce this idea further, one way of distinguishing these types of conflict is that “I want” means “I am thinking of myself with this thing in the future”, whereas “I should” means “I am thinking of myself with this thing in the past/present”.
Notice that only one of these thoughts is compatible with the reality of not having the thing in the present. I can not-have food now, and then have-food later. But I can’t not-have food now, and also have-food now, nor can I have-food in the past if I didn’t already. (No time travel allowed!)
Similarly, in clinging to positive things, we are imagining a future negative state, then rejecting it, insisting the positive thing should last forever. It’s not quite as obvious a causality violation as time travel, but it’s close. ;-)
I guess what I’m saying here is that ISTM we experience suffering when our “how things (morally or rightly) ought to be” model conflicts with our “how things actually are” model, by insisting that the past, present, or likely future are “wrong”. This model seems to me to be a lot simpler than all these hypotheses about valence and projections and self-reference and whatnot.
You say that :
But this seems like adding unnecessary epicycles. The idea of an “urge” does not require the extra steps of “predicting that resisting the urge would be unpleasant” or “having a craving to give in to the urge”, etc., because that’s what “having an urge” means. The other parts of this sequence are redundant; it suffices to say, “I have an urge to argue with that person”, because the urge itself combines both the itch and the desire to scratch it.
Notably, hypothesizing the other parts doesn’t seem to make sense from an evolutionary POV, as it is reasonable to assume that the ability to have “urges” must logically precede the ability to make predictions about the urges, vs. the urges themselves encoding predictions about the outside world. If we have evolved an urge to do something, it is because evolution already “thinks” it’s probably a good idea to do the thing, and/or a bad idea not to, so another mechanism that merely recapitulates this logic would be kind of redundant.
(Not that redundancy can’t happen! After all, our brain is full of it. But such redundancy as described here isn’t necessary to a logical model of craving or suffering, AFAICT.)
Well, whether or not a model is needlessly complex depends on what it needs to explain. :-)
Back when I started thinking about the nature of suffering, I also had a relatively simple model, basically boiling down to “suffering is about wanting conflicting things”. (Upon re-reading that post from nine years back, I see that I credit you for a part of the model that I outlined there. We’ve been at this for a while. :-)) I still had it until relatively recently. But I found that there were things which it didn’t really explain or predict. For example:
You can decouple valence and aversion, so that painful sensations appear just as painful as before, but do not trigger aversion.
Changes to the sense of self cause changes even to the aversiveness of things that don’t seem to be related to a self-model (e.g. physical pain).
You can learn to concentrate better by training your mind to notice that it keeps predicting that indulging in a distraction will eliminate the discomfort of the distracting urge, when it could just as well drop the distraction entirely.
There are mental moves that you can make to investigate craving, in a way that causes the mind to notice that maintaining the craving is actually preventing it from feeling good, and then to drop it.
If you can get your mind into states in which there is little or no craving, then those states will feel intrinsically good without regard to their valence.
Upon investigation, you can notice that many states that you had thought were purely pleasant actually contain a degree of subtle discomfort; releasing the craving in those states then gets you into states that are more pleasant overall.
If you train your mind to have enough sensory precision, you can eventually come to directly observe how the mind carries out the kinds of steps that I described under “Let’s say that there is this kind of a process”: an experience being painted with valence, that valence triggering craving, a new self being fabricated by that craving, and so on.
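To make the decoupling claims in this list concrete, here’s a toy sketch of the pipeline from the last point (the multiplicative gating and the numbers are assumptions made purely for illustration, not a claim about the actual mechanism):

```python
# Toy pipeline: valence -> craving -> aversion. If aversion is computed
# as negative valence *gated by* craving, then valence and aversion can
# in principle decouple: drop the craving and the pain stays equally
# "painful" (same valence) while triggering no aversion.

def aversion(valence: float, craving_gain: float) -> float:
    """Negative valence turns into aversion only to the extent that
    craving (modeled here as a simple gain in [0, 1]) amplifies it."""
    return max(0.0, -valence) * craving_gain

pain = -0.8  # the raw valence of the sensation, unchanged in both cases

print(aversion(pain, craving_gain=1.0))  # 0.8: ordinary aversive pain
print(aversion(pain, craving_gain=0.0))  # 0.0: same pain, no aversion
```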
From your responses, it’s not clear to me how much credibility you lend to these kinds of claims. If you feel that meditation doesn’t actually provide any real insight into how minds work and that I’m just deluded, then I think that that’s certainly a reasonable position to hold. I don’t think that that position is true, mind you, but it seems reasonable that you might be skeptical. After all, most of the research on the topic is low quality, there’s plenty of room for placebo and motivated reasoning effects, introspection is famously unreliable, et cetera.
But ISTM that if you are willing to at least grant that I and others who say these kinds of things are not outright lying about our subjective experience… then you need to at least explain why it seems to us that the urge and the aversion from resisting the urge can become decoupled, or why reductions in the sense of self seem to systematically lead to reductions in the aversiveness of negative valence.
I agree that if I were just developing a model of human motivation and suffering from first principles and from what seems to make evolutionary sense, I wouldn’t arrive at this kind of an explanation. “An urge directly combines an itch and the desire to scratch it” would certainly be a much more parsimonious model… but it would then predict that you can’t have an urge without a corresponding need to engage in it, and that prediction is contradicted both by my experience and the experience of many others who engage in these kinds of practices.
No, that’s a good point, as far as it goes. There does seem to be some sort of meta-process that you can use to decouple from craving regarding these things, though in my experience it seems to require continuous attention, like an actively inhibitory process. In contrast, the model description you gave made it sound like craving was an active process that one could simply refrain from, and I don’t think that’s predictively accurate.
Your points regarding what’s possible with meditation also make some sense… it’s just that I have trouble reconciling the obvious evolutionary model with “WTF is meditation doing?” in a way that doesn’t produce things that shouldn’t be there.
Consciously, I know it’s possible to become willing to experience things that you previously were unwilling to experience, and that this can eliminate aversion. I model this largely under the second major motivational mechanic, that of risk/reward, effort/payoff.
That is, that system can decide that some negative thing is “worth it” and drop the conflict about it. And meditation could theoretically reset the threshold for that, since to some extent meditation is just sitting there despite the lack of payoff, and despite the considerable payoffs offered for responding to current urges. If this recalibrates the payoff system, it would make sense within my own model, and resolve the part where I don’t see how what you describe could be a truly conscious process, in the way that you made it sound.
IOW, I might rather say that part of our motivational system is a module for determining which urges should be acted upon and which are not worth it, or perhaps one that translates mind/body/external states into urges or the lack thereof, and that you can retrain this system to have different baselines for what constitutes “urge”-ency. ;-) (And thus, a non-conscious version of “valence” in your model.)
That doesn’t quite work either, because ISTM that meditation changes the threshold for all urges, not just the specific ones trained. Also, the part about identification isn’t covered here either. It might be yet another system being trained, perhaps the elusive “executive function” system?
On the other hand, I find that the Investor (my name for the risk/reward, effort/payoff module) is easily tricked into dropping urges for reasons other than self-identification. For example, the Investor can be tricked into letting you get out of a warm bed into a cold night if you imagine you have already done so. If you imagine that you are already cold, there is nothing to be gained by refraining from getting up, and this shifts the “valence”, as you call it, in favor of getting up, because the Investor fundamentally works by comparing projections against an “expected status quo”. So if you convince it that some other status quo is “expected”, it can be made to go along with almost anything.
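To make the Investor’s mechanics concrete, here’s a toy sketch (all of the utility numbers, the effort threshold, and the function name are made up for illustration):

```python
# Toy model of the Investor: it approves an action only if the
# projected outcome beats the *expected status quo* by more than the
# effort involved. So the trick is not to change the projection, but
# to change what the system treats as the baseline.

def investor_approves(projected: float, expected_status_quo: float,
                      effort: float = 0.1) -> bool:
    """Act only if the projection beats the expected baseline by more
    than the effort required."""
    return projected - expected_status_quo > effort

warm_in_bed, already_cold = 0.9, 0.2
after_getting_up = 0.2 + 0.5   # cold, but whatever you got up for is done

# Normal framing: the baseline is "warm in bed", so getting up reads
# as a net loss and the Investor refuses.
print(investor_approves(after_getting_up, expected_status_quo=warm_in_bed))   # False

# Trick framing: imagine you are *already* cold. The same projection
# now reads as a pure gain, and the Investor approves.
print(investor_approves(after_getting_up, expected_status_quo=already_cold))  # True
```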
And so I suppose if you imagine that it is not you who is the one who is going to be cold, then that might work just as well. Or perhaps making it “not me” somehow convinces the Investor that the changes in state are not salient to its evaluations?
Hm. Now that my attention has been drawn to this, it’s like an itch I need to scratch. :) I am wondering now, “Wait, why is the Investor so easily tricked?” And for that matter, given that it is so easily tricked, could the feats attributed to long-term meditation be accomplished in general using such tricks? Can I imagine my way to no-self and get the benefits without meditating, even if only temporarily?
Also, I wonder if I have been overlooking the possibility of using Investor mind-tricks to deal with task-switching inertia, which is very similar to having to get out of a warm bed. What if I imagine I have already changed tasks? Hm. And if I am imagining no-self, will starting unpleasant tasks be less aversive?
Okay, I’m off to experiment now. This is exciting!
I am very much impressed by the exchange in the parent comments and cannot upvote it enough.
With regard to the ‘mental motion’:
As I see it, the view that this is (sometimes) an active process makes sense from a global workspace theory perspective: there is a part of one’s mind that actually decides whether or not to activate craving. It is possible (especially if trained through meditation) to connect this part to the global workspace, and thus to consciousness, which allows noticing and influencing the decision. If this connection is strong enough and can be activated consciously, it can make sense to call this process a mental motion.
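As a purely illustrative sketch of what I mean (the “connection strength” threshold and all the particulars are stand-ins of mine, not claims about the actual mechanism):

```python
# Toy global-workspace sketch: a craving module always produces its
# proposal, but only a sufficiently strong connection to the workspace
# lets consciousness see the proposal before it fires, and veto it.

class CravingModule:
    def propose(self, valence: float) -> bool:
        # Default policy: negative valence -> crave relief from it.
        return valence < 0

class GlobalWorkspace:
    def __init__(self, connection_strength: float):
        self.connection_strength = connection_strength  # grows with practice

    def process(self, module: CravingModule, valence: float,
                veto: bool = False) -> bool:
        proposal = module.propose(valence)
        noticed = self.connection_strength > 0.5  # strong enough to broadcast
        if noticed and veto:
            return False   # the "mental motion": decision caught and dropped
        return proposal    # otherwise the craving activates automatically

untrained = GlobalWorkspace(connection_strength=0.1)
trained = GlobalWorkspace(connection_strength=0.9)
print(untrained.process(CravingModule(), valence=-0.6, veto=True))  # True: fires anyway
print(trained.process(CravingModule(), valence=-0.6, veto=True))    # False: noticed, dropped
```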
Cool. :)
An analogy that I might use is that learning to let go of craving is kind of the opposite of the thing where you practice an effortful activity until it becomes automatic. Craving usually triggers automatically and outside your conscious control, but you can gradually increase your odds of being able to notice it, catch it, and do something about it.
“An actively inhibitory process” sounds accurate for some of the mental motions involved. Though merely bringing more conscious attention to the process also seems to affect it, and in some cases interrupt it, even if you don’t actively inhibit it.
Not sure how I made it sound :-) but a good description might be “semi-conscious”, in the same sense that something like Focusing can be: you do it, something conscious comes up, and then a change might happen. Sometimes enough becomes consciously accessible that you can clearly see what it was about, sometimes you just get a weird sensation and know that something has shifted, without knowing exactly what.
Any results yet? :)
Eh. Sorta? I’ve been busy with clients the last few days, so not a lot of time for experimenting. I have found myself, or rather, found not-myself, several times, almost entirely accidentally or incidentally. A little like a perspective shift between two possible interpretations of an image; or more literally, like a shift between first-person and third-person-over-the-shoulder views in a video game.
In the third person perspective, I can observe limbs moving, feel the keys under my fingers as they type, and yet I am not the one who’s doing it. (Which, I suppose, I never really was anyway.)
TBH, I’m not sure if it’s that I haven’t found any unpleasant experiences to try this on, or if it’s more that because I’ve been spontaneously shifting to this state, I haven’t found anything to be an unpleasant experience. :-)
Cool, that sounds like a mild no-self state alright. :) Any strong valence is likely to trigger a self-schema and pull you out of it, though; staying in it is a question of practice.
Your description kinda reminds me of the approach in Loch Kelly’s The Way of Effortless Mindfulness; it has various brief practices that may induce states like the one that you describe. E.g. in this one, you imagine the kind of relaxing state in which there is no problem to solve, and the sense of self just falls away. (Directly imagining a no-self state is hard, because checking whether you are in a no-self state yet activates the self-schema. But if you instead imagine an external state which is likely to put you in a no-self state, you don’t get that kind of self-reference, no pun intended.)
Recently I’ve also gotten interested in the Alexander Technique, which seems to have a pretty straightforward series of steps for expanding your awareness and then getting your mind to just automatically do things in a way which feels like non-doing. It also seems to induce the kinds of states that you describe, of just watching oneself work, which I had previously only gotten from meditation.
A hypothesis that I’ve been considering is whether the shift to becoming more social might have caused a second layer of motivation to evolve. Less social animals can act purely based on physical considerations like the need to eat or avoid a predator, but for humans every action has potential social implications, so it needs to also be evaluated in that light. There are some interesting anecdotes, like Helen Keller’s account suggesting that she only developed a self after learning language. Her description of her old state of being sounds like there was just the urge, which was then immediately acted upon; and this mode of operation then became irreversibly altered:
This would also make sense in light of the observation that the sense of self may disappear when doing purely physical activities (you fall back on the original set of systems, which doesn’t need to think about the self), the PRISM model of consciousness as a conflict-solver, the way that physical and social reasoning seem to be pretty distinct, and a kind of semi-modular approach (you have the old, primarily physical system, and then a new one that integrates social considerations on top of the old system’s suggestions). If you squint, the stuff about simulacra also feels kinda relevant, as an entirely new set of implications that diverge from physical reality and need to be thought about on their own terms.
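As a toy sketch of that semi-modular picture (the two-layer split and every particular here are illustrative assumptions of mine, not a claim about actual neural organization):

```python
# Toy two-layer motivation model: the old system maps physical state
# directly to an urge; the newer system evaluates that urge for social
# implications before letting it through. A self-model would only need
# to be consulted at the second layer.

def physical_layer(state: dict) -> str:
    """The old system: bodily/external state -> urge, acted on directly."""
    if state.get("predator_nearby"):
        return "flee"
    if state.get("hungry"):
        return "eat"
    return "rest"

def social_layer(urge: str, social_context: dict) -> str:
    """The newer system: checks the suggested urge against social
    considerations, possibly overriding it."""
    if urge == "eat" and social_context.get("guests_not_yet_served"):
        return "wait"  # the physical suggestion gets overridden
    return urge

urge = physical_layer({"hungry": True})                     # -> "eat"
print(social_layer(urge, {"guests_not_yet_served": True}))  # -> "wait"
```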
I wouldn’t be very surprised if this hypothesis turned out to be false, but at least there’s suggestive evidence.