I want to comment on the interpretation of S(A) as an “intention” to do A. Note that I’m coming back here from section 6, Awakening / Enlightenment / PNSE, so if somebody hasn’t read that, this might be unclear.
Using the terminology above, A here is “the patterns of motor control and attention control outputs that do (rather than ‘would’) collectively make my muscles actually execute the standing-up action.”
And S(A) is “the patterns of motor control and attention control outputs that do collectively make my muscles actually execute the standing-up action are in my awareness.” Meaning a representation of “awareness” is active together with the container relationship and a representation of A. (I am still very unsure about how “awareness” is learned and represented.)[1]
Referring to 2.6.2, I agree with this:
[S(A) and A] are obviously strongly associated with each other. They can activate simultaneously. And even if they don’t, each tends to bring the other to mind, such that the valence of one influences the valence of the other.
and
For any action A where S(A) has positive valence, there’s often a two-step temporal sequence: [S(A) ; A actually happens]
I agree that in this co-occurrence sense “S(X) often summons a follow-on thought of X.” But it is not causing it, which is what “summon” might imply. This choice of word is perhaps an indication of the uncertainty here.
Clearly, action A can happen without S(A) being present. In fact, actions are often more effectively executed if you don’t think too hard about them[citation needed]. An S(A) is not required. Maybe S(A) and A co-occur often, but that doesn’t imply causality. But, indeed, it would seem to be causal in the context of a homunculus model of action. Treating it as causal/vitalistic is predictive. The real reason is the co-occurrence of the thoughts, which can have a common cause, such as when the S(A) thought brings up additional associations that lead to higher-valence thoughts/actions later (e.g., chains of S(A), A, S(A)->S(B), B).
Thus, S(A) isn’t really an “intention to do A” per se but just what it says on the tin: “awareness of (expecting) A.” I would say it is only an “intention to do A” if the thought S(A) also includes the concept of intention—which is a concept tied to the homunculus and an intuitive model of agency.
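To make the common-cause point concrete, here is a minimal toy sketch in Python (all probabilities and the “precursor” variable are invented for illustration, not taken from the post): a shared precursor drives both S(A) and A, so they co-occur reliably, yet suppressing S(A) by intervention leaves A unchanged.

```python
import random

random.seed(0)

def run_trial(force_sa_off=False):
    # Hypothetical shared precursor (e.g. a pre-conscious motor plan).
    precursor = random.random() < 0.3
    # Both the self-reflective thought S(A) and the action A are driven by
    # the precursor, not by each other (all probabilities are made up).
    s_a = (not force_sa_off) and precursor and random.random() < 0.9
    a = precursor and random.random() < 0.9
    return s_a, a

trials = [run_trial() for _ in range(100_000)]
n_sa = sum(s for s, _ in trials)
p_a_given_sa = sum(a for s, a in trials if s) / n_sa
p_a = sum(a for _, a in trials) / len(trials)

# S(A) and A co-occur: A is much more likely when S(A) was present...
print(f"P(A | S(A) present) = {p_a_given_sa:.2f}")   # ~0.90
print(f"P(A) overall        = {p_a:.2f}")            # ~0.27

# ...yet suppressing S(A) by intervention leaves A untouched, because in
# this toy model only the common precursor drives A.
suppressed = [run_trial(force_sa_off=True) for _ in range(100_000)]
print(f"P(A | S(A) suppressed) = {sum(a for _, a in suppressed) / len(suppressed):.2f}")  # ~0.27
```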
I am still very unsure about how “awareness” is learned and represented. Above it says
the cortex, which has a limited computational capacity that gets deployed serially [...] When this aspect of the brain algorithm is itself incorporated into a generative model via predictive (a.k.a. self-supervised) learning, it winds up represented as an “awareness” concept,
but this doesn’t say how. The brain needs to observe something (senses, interoception) from which it can infer this. What pattern, in which observations, would that be? The serial processing is a property the brain can’t observe unless there is some way to combine or compare past and present “thoughts.” That’s why I have long thought that there has to be feedback from the current thought back as an input signal (thoughts as observations). Such a connection is not present in the brain-like model, but it might not be the only way. Another way would be via memory: if a thought is remembered, then one way of implementing memory would be to provide a representation of the remembered thought as input. In any case, there must be a relation between successive thoughts, otherwise they couldn’t influence each other.
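To sketch what I mean (all names and representations here are invented; this only illustrates the two routes, not the actual mechanism), both routes end up relating each thought to its predecessor, just via different plumbing:

```python
def generate_thought(observation, prior_thought):
    # Stand-in for the cortex producing the next thought from its inputs;
    # here a "thought" is just a descriptive string.
    return f"thought({observation} | following: {prior_thought})"

observations = ["pencil on desk", "reach for pencil", "write a note"]

# Route 1: direct feedback -- the current thought is fed back as part of the
# input at the next step ("thoughts as observations").
thought = "nothing"
for obs in observations:
    thought = generate_thought(obs, prior_thought=thought)
    print(thought)

# Route 2: memory -- the thought is not fed back directly; instead a stored
# (remembered) representation of it is supplied as input at the next step.
memory = []
for obs in observations:
    recalled = memory[-1] if memory else "nothing"
    memory.append(generate_thought(obs, prior_thought=recalled))
print(memory[-1])  # same relation between successive thoughts, different route
```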
It seems plausible that, in a sequence of events, awareness S(A) is related to a pattern of A having occurred previously in the sequence (or being expected to occur).
The brain needs to observe something (senses, interoception) from which it can infer this. What pattern, in which observations, would that be?
(partly copying from my other comment) For example, consider the following fact.
FACT: Sometimes, I’m thinking about pencils. Other times, I’m not thinking about pencils.
Now imagine that there’s a predictive (a.k.a. self-supervised) learning algorithm which is tasked with predicting upcoming sensory inputs, by building generative models. The above fact is very important! If the predictive learning algorithm does not somehow incorporate that fact into its generative models, then those generative models will be worse at making predictions. For example, if I’m thinking about pencils, then I’m likelier to talk about pencils, and look at pencils, and grab a pencil, etc., compared to if I’m not thinking about pencils. So the predictive learning algorithm is incentivized (by its predictive loss function) to build a generative model that can represent the fact that any given concept might be active in the cortex at a certain time, or might not be.
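To make that incentive concrete, here is a toy sketch in Python. All the probabilities are invented, and the “model with the latent concept” is simply handed the true state rather than learning it; the point is only that representing whether the concept is active yields a lower predictive loss, which is the pressure pushing the generative model to include such a variable.

```python
import math
import random

random.seed(1)

P_THINKING = 0.2                                 # assumed fraction of time the concept is active
P_PENCIL_BEHAVIOR = {True: 0.6, False: 0.05}     # assumed chance of pencil-related behavior

# Simulated stream: at each step, a hidden "thinking about pencils" state and
# an observable "pencil-related behavior occurred" bit.
data = []
for _ in range(50_000):
    thinking = random.random() < P_THINKING
    observed = random.random() < P_PENCIL_BEHAVIOR[thinking]
    data.append((thinking, observed))

def neg_log_likelihood(p_true, observed):
    return -math.log(p_true if observed else 1.0 - p_true)

# Model 1: no representation of the latent concept; predicts the base rate.
base_rate = sum(obs for _, obs in data) / len(data)
loss_no_latent = sum(neg_log_likelihood(base_rate, obs) for _, obs in data) / len(data)

# Model 2: represents the latent "concept active" state and conditions on it.
# (Here it is given the true state; a real learner would have to infer it,
# but the payoff in predictive loss is what makes representing it worthwhile.)
loss_with_latent = sum(neg_log_likelihood(P_PENCIL_BEHAVIOR[thinking], obs)
                       for thinking, obs in data) / len(data)

print(f"avg predictive loss without the latent concept: {loss_no_latent:.3f}")
print(f"avg predictive loss with the latent concept:    {loss_with_latent:.3f}")  # lower
```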
That’s why I have long thought that there has to be feedback from the current thought back as an input signal (thoughts as observations). Such a connection is not present in the brain-like model, but it might not be the only way. Another way would be via memory.
I mean yeah obviously the cortex has various types of memory, and this fact is important for all kinds of things. :)
Clearly, action A can happen without S(A) being present. In fact, actions are often more effectively executed if you don’t think too hard about them[citation needed]. An S(A) is not required. Maybe S(A) and A co-occur often, but that doesn’t imply causality.
These sentences seem to suggest that A’s are either always, or never, caused by a preceding S(A), and that out of those two options, “never” is more plausible. But that’s a false dichotomy. I propose that sometimes they are and sometimes they aren’t caused by S(A).
By analogy, sometimes doors open because somebody pushed on them, and sometimes doors open without anyone pushing on them. Also, it’s possible for there to be a very windy day where the door would open with 30% probability in the absence of a person pushing on it, but opens with 85% probability if somebody does push on it. In that case, did the person “cause” the door to open? I would say yeah, they “partially caused it” to open, or “causally contributed to” the door opening, or “often cause the door to open”, or something like that. I stand by my claim that the self-reflective S(standing up), if sufficiently motivating, can “cause” me to then stand up, in that sense.
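For concreteness, that “partially caused” notion can be put in numbers using the 30% and 85% figures assumed above; this is just ordinary risk-difference / relative-risk arithmetic, nothing specific to the post:

```python
p_open_without_push = 0.30   # very windy day, nobody pushes (assumed above)
p_open_with_push = 0.85      # somebody pushes (assumed above)

# Two common ways to quantify a partial causal contribution:
risk_difference = p_open_with_push - p_open_without_push   # pushing adds 55 percentage points
relative_risk = p_open_with_push / p_open_without_push     # pushing makes opening ~2.8x as likely

print(f"risk difference: {risk_difference:.2f}")
print(f"relative risk:   {relative_risk:.2f}")
```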
The fact that “actions are often more effectively executed if you don’t think too hard about them” is referring to the fact that if you have a learned skill, in the form of some optimized context-dependent temporal sequence of motor-control and attention-control commands, then self-reflective thoughts can interrupt and thus mess up that temporal sequence, just as people shouting random numbers can disrupt someone trying to count, or how you can’t sing two songs in your head simultaneously. A.k.a. the limited capacity of cortex processing. Whereas that section is more about whether some course-of-action (like saying something, wiggling your fingers, standing up, etc.) starts or not.
Flow states (post 4) are a great example of A’s happening without any S(A).
You give the example of the door that is sometimes pushed open, but let me give alternative analogies:
S(A): Forecaster: “The stock price of XYZ will rise tomorrow.” A: XYZ’s stock rises the next day.
S(A): Drill sergeant: “There will be exercises at 14:00 hours.” A: Military units start their exercises at the designated time.
S(A): Live commentator: “The rocket is leaving the launch pad.” A: A rocket launches from the ground.
Clearly, there is a reason for the co-occurrence, but it is not one causing the other. And it is useful to have the forecaster because making the prediction salient helps improve predictions. Making the drill time salient improves punctuality or routine or something. Not sure what the benefit of the rocket launch commentary is.
Otherwise I think we agree.
I agree that there is such a thing as two things occurring in sequence where the first doesn’t cause the second. But I don’t think this is one of those cases. Instead, I think there are strong reasons to believe that if S(A) is active and has positive valence, then that causally contributes to A tending to happen afterwards.
For example, if A = stepping into the ice-cold shower, then the object-level idea of A is probably generally negative-valence—it will feel unpleasant. But then S(A) is the self-reflective idea of myself stepping into the shower, and relatedly how stepping into the shower fits into my self-image and the narrative of my life etc., and so S(A) is positive valence.
I won’t necessarily wind up stepping into the shower (maybe I’ll chicken out), but if I do, then the main reason why I do is the fact that the S(A) thought was active in my mind immediately beforehand, and had positive valence. Right?
There is just one problem that Libet discovered: There is no time for S(A) to cause A.
My favorite example is throwing a ball: A is releasing the ball at the right moment to hit a target. This requires millisecond precision of release. The S(A) is precisely timed to coincide with the release. It feels like you are releasing the ball at the moment your hand releases it. But that can’t be true, because the signal from the brain to the hand alone takes longer than the duration of a thought. If your theory were right, you would feel the intention to release the ball and a moment later would have the sensation of the result happening.
Now, one solution around this would be to time-tag thoughts and reorder them afterwards, maybe in memory—a bit like how out-of-order execution in CPUs handles parallel execution of sequential instructions. But I’m not sure that is what is going on or that you think it is.
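Here is a small illustrative sketch of that time-tagging idea (the event names and delays are invented): processing can finish out of order, but if each event carries a tag for when it actually occurred, a later pass can reconstruct the experienced order, loosely like a CPU retiring out-of-order instructions in program order.

```python
# Each entry: (time_tag_ms, processing_delay_ms, description); the tag records
# when the event occurred, the delay when the brain finished processing it.
events = [
    (80,  60, "arm reaches release point"),
    (100, 20, "felt 'now!' moment of release"),
    (120, 90, "hand actually releases ball"),
    (200, 40, "ball seen leaving hand"),
]

# Completion order (tag + delay) can differ from occurrence order...
print("order in which processing completes:")
for tag, delay, desc in sorted(events, key=lambda e: e[0] + e[1]):
    print(f"  done at {tag + delay:3d} ms: {desc}")

# ...but memory can reorder by the time tag, recovering the experienced sequence.
print("order reconstructed from time tags:")
for tag, _, desc in sorted(events, key=lambda e: e[0]):
    print(f"  occurred at {tag:3d} ms: {desc}")
```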
So, my conclusion is that there is a common cause of both S(A) and A.
And my interpretation of Daniel Ingram’s comments is different from yours.
In Mind and Body, the earliest insight stage, those who know what to look for and how to leverage this way of perceiving reality will take the opportunity to notice the intention to breathe that precedes the breath, the intention to move the foot that precedes the foot moving, the intention to think a thought that precedes the thinking of the thought, and even the intention to move attention that precedes attention moving.
These “intentions to think/do” that Ingram refers to are not things untrained people can notice. There are things in the mind that precede the S(A) and A and cause them, but people normally can’t notice them, and thus they can’t be S(A). I say these precursors are the same things picked up in the Libet experiments and neurological measurements.
We’re definitely talking past each other somehow. For example, your statement “The S(A) is precisely timed to coincide with the release” is (to me) obviously false. In the case of “deciding to throw a ball”, A would be the time-extended action of throwing the ball, and S(A) would be me “making a decision of my free will” to throw the ball, which happens way before the release, indeed it happens before I even start moving my arm. Releasing the ball isn’t a separate “decision” but rather part of the already-decided course-of-action.
(Again, I’m definitely not arguing that every action is this kind of stereotypical [S(A); A] “intentional free will decision”, or even that most actions are. Non-examples include every action you take in a flow state, and indeed you could say that every day is full of little “micro-flow-states” that last for even just a few seconds when you’re doing something rather than self-reflecting.)
…Then after the fact, I might recall the fact that I released the ball at such-and-such moment. But that thought is not actually about an “action” for reasons discussed in §2.6.1.
I guess this will only stop when we have made our thoughts clear enough for an implementation that allows us to inspect the system for S(A) and A. Which is OK.
At least this has helped clarify that you think of S(A) as (often) preceding A by a lot, which wasn’t clear to me. I think this complicates the analysis because of where to draw the line. Would it count if I imagine throwing the ball one day (S(A)) but execute it during the game the next day as I intended?
At least this has helped clarify that you think of S(A) as (often) preceding A by a lot, which wasn’t clear to me.
Not really; instead, I think throwing the ball is a time-extended course of action, as most actions are. If I “decide” to say a sentence or sing a song, I don’t separately “decide” to say the next syllable, then “decide” to say the next syllable after that, etc.
What do you make of the Libet experiments?
He did a bunch of experiments, I’m not sure which ones you’re referring to. (The “conscious intentions” one?) The ones I’ve read about seem mildly interesting. I don’t think they contradict anything I wrote or believe. If you do think that, feel free to explain. :)
I mean this (my summary of the Libet experiments and their replications):
Brain activity detectable with EEG (the Readiness Potential) begins between 350 ms and multiple seconds (depending on experiment and measurement resolution) before the person consciously feels the intention to act (a voluntary motor movement).
Subjects report becoming aware of their intention to act (via clock tracking) about 200 ms before the action itself (e.g., pressing a button). The 200 ms seems relatively fixed, but cognitive load can delay it.
To give a specific quote:
Matsuhashi and Hallett: Our result suggests that the perception of intention rises through multiple levels of awareness, starting just after the brain initiates movement.
[...]
1. The first detected event in most subjects was the onset of BP. They were not aware of the movement genesis at this time, even if they were alerted by tones.
2. As the movement genesis progressed, the awareness state rose higher and after the T time, if the subjects were alerted, they could consciously access awareness of their movement genesis as intention. The late BP began within this period.
3. The awareness state rose even higher as the process went on, and at the W time it reached the level of meta-awareness without being probed. In Libet et al.’s clock task, subjects could memorize the clock position at this time.
4. Shortly after that, the movement genesis reached its final point, after which the subjects could not veto the movement any more (P time).
[...]
We studied the immediate intention directly preceding the action. We think it best to understand movement genesis and intention as separate phenomena, both measurable. Movement genesis begins at a level beyond awareness and over time gradually becomes accessible to consciousness as the perception of intention.
Now, I think you’d say that what they measured wasn’t S(A) but something else that is causally related, but then you are moving farther away from patterns we can observe in the brain. And your theory still has to explain the subclass of those S(A) that they did measure. The participants apparently thought these to be their decisions S(A) about their actions A.
I don’t think S(A) or any other thought bursts into consciousness from the void via an acausal act of free will—that was the point of §3.3.6. I also don’t think that people’s self-reports about what was going on in their heads in the immediate past should necessarily be taken at face value—that was the point of §2.3.
Every thought (including S(A)) begins its life as a little seed of activation pattern in some little part of the cortex, which gets gradually stronger and more widespread across the global workspace over the course of a fraction of a second. If that process gets cut off prematurely, then we don’t become aware of that thought at all, although sometimes we can notice its footprints via an appropriate attention-control query.
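Here is a cartoon of that growth-or-cutoff dynamic in code; every number is invented, and it is only meant to restate the verbal picture: a seed of activation grows over a few hundred milliseconds, counts as “aware” if it crosses a notional global-workspace threshold, and never does if it is cut off early, though it leaves a faint trace.

```python
def simulate_thought(growth_rate=0.15, suppressed_at_ms=None,
                     threshold=1.0, dt_ms=10, duration_ms=500):
    """Toy model: a thought's activation grows each time step until it either
    crosses the awareness threshold or gets cut off and decays. Returns
    (became_aware, time_ms, peak_activation)."""
    activation = 0.05          # little seed of activation (arbitrary units)
    peak = activation
    for t in range(0, duration_ms, dt_ms):
        if suppressed_at_ms is not None and t >= suppressed_at_ms:
            activation *= 0.5  # process cut off prematurely: activation decays
        else:
            activation *= 1.0 + growth_rate
        peak = max(peak, activation)
        if activation >= threshold:
            return True, t, peak
    return False, duration_ms, peak

print(simulate_thought())                      # crosses threshold after ~200 ms -> aware
print(simulate_thought(suppressed_at_ms=100))  # cut off early -> never aware, faint trace
```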
Does that help?
Maybe you’re thinking that, if I assert that a positive-valence S(A) caused A to happen, then I must believe that there’s nothing upstream that in turn caused S(A) to appear and to have positive valence? If so, that seems pretty silly to me. That would be basically the position that nothing can ever cause anything, right?
(“Your Honor, the victim’s death was not caused by my client shooting him! Rather, The Big Bang is the common cause of both the shooting and the death!” :-D )
I think your explanation in section 8.5.2 resolves our disagreement nicely. You refer to S(X) thoughts that “spawn up” successive thoughts that eventually lead to X (I’d say X’) actions shortly after (or much later), whereas I was referring to S(X) that cannot give rise to X immediately. I think the difference was that you are more lenient with what X can be, such that S(X) can be about an X that happens much later, which wouldn’t work in my model of thoughts.
Intuitive model underlying that statement: There’s a frame (§2.2.3) “X wants Y” (§3.3.4). This frame is being invoked, with X as the homunculus, and Y as the concept of “inside” as a location / environment.
How I describe what’s happening using my framework: There’s a systematic pattern (in this particular context), call it P, where self-reflective thoughts concerning the inside, like “myself being inside” or “myself going inside”, tend to trigger positive valence. That positive valence is why such thoughts arise in the first place, and it’s also why those thoughts tend to lead to actual going-inside behavior.
In my framework, that’s really the whole story. There’s this pattern P. And we can talk about the upstream causes of P—something involving innate drives and learned heuristics in the brain. And we can likewise talk about the downstream effects of P—P tends to spawn behaviors like going inside, brainstorming how to get inside, etc. But “what’s really going on” (in the “territory” of my brain algorithm) is a story about the pattern P, not about the homunculus. The homunculus only arises secondarily, as the way that I perceive the pattern P (in the “map” of my intuitive self-model).
Thanks. It doesn’t help because we already agreed on these points.
We both understand that there is a physical process in the brain (neurons firing, etc.), as you describe in 3.3.6, that gives rise to a) S(A), b) A, and c) the precursors to both, as measured by Libet and others.
We both know that people’s self-reports are unreliable and informed by their intuitive self-models. To illustrate that I understand 2.3, let me give an example: my son has figured out that people hear what they expect to hear and has experimented with leaving out fragments of words or sentences, enjoying how people never notice that anything is off (example: “ood morning”). Here, the missing part doesn’t make it into people’s awareness, even though the whole sentence very much does.
I’m not asserting that there is nothing upstream of S(A) that is causing it. I’m asserting that an individual S(A) is not causing A. I assert this because, timing-wise, it can’t, and, equivalently, because there is no neurological action path from S(A) to A. The only relation between S(A) and A is that the co-occurrence of S(A) and A has statistically had positive valence in the past. And this co-occurrence is facilitated by a common precursor. But saying S(A) causes A is as right or wrong as saying A causes S(A).