Does that apply to that explanation as well?
Does it apply to explanations made in advance of the actions? For example, this evening (it is presently morning) I intend buying groceries on my way home from work, because there’s stuff I need and this is a convenient opportunity to get it. When I do it, that will be the explanation.
In the quoted article, the explanation he presents as a paradigmatic example of his general thesis is the reflex of jumping away from rustles in the grass. He presents an evolutionary just-so story to explain it, but one which fails to explain why I do not jump away from rustles in the grass, although surely I have much the same evolutionary background as he. I am more likely to peer closer to see what small creature is scurrying around in there. But then, I have never lived anywhere that snakes are a danger. He has.
And yet this, and split-brain experiments, are the examples he cites to say that “often”, we shouldn’t listen to anyone’s explanations of their behaviour.
If you were to have asked me why I had jumped, I would have replied that I thought I’d seen a snake. The reality, however, is that I jumped way before I was conscious of the snake.

I smell crypto-dualism. “I thought there was a snake” seems to me a perfectly good description of the event, even given that I jumped way before I was conscious of the snake. (He has “I thought I’d seen a snake”, but this is a fictional example, and I can make up fiction as well as he can.)
The article references his book. Anyone read it? The excerpts I’ve skimmed on Amazon just consist of more evidence that we are brains: the Libet experiments, the perceived simultaneity of perceptions whose neural signals aren’t, TMS experiments, and so on. There are some digressions into emergence, chaos, and quantum randomness. Then—this is his innovation, highlighted in the publisher’s blurb—he sees responsibility as arising from social interaction. Maybe I’m missing something in the full text, but is he saying that someone alone really is just an automaton, and only in company can one really be a person?
I believe there are people like that, who only feel alive in company and feel diminished when alone. Is this just an example of someone mistaking their idiosyncratic mental constitution for everybody’s?
Did you in fact buy the groceries?
I did.
There are many circumstances that might have prevented it; but none of them happened. There are many others that might have obstructed it; but I would have changed my actions to achieve the goal.
Goals of such a simple sort are almost invariably achieved.
Three upvotes for demonstrating the basic competence to buy groceries?
There is a famous study that digs a bit deeper and convincingly demonstrates it: Telling more than we can know: Verbal reports on mental processes.
From the abstract:

This suggests that though people may not be able to observe directly their cognitive processes, they will sometimes be able to report accurately about them.
It seems to me that “cognitive processes” could be replaced by “physical surroundings”, and the resulting statement would still be true. I am not sure how significant these findings are. We have imperfect knowledge of ourselves, but we have imperfect knowledge of everything.
Obviously not, since Gazzaniga is not explaining his own actions.
He is, among other things, explaining some of his own actions: his actions of explaining his actions.
You seem to have failed to notice the key point. Here’s a slight rephrasing of it: “explanations for actions will fail to reflect the actual causes of those actions to the extent that those actions are the results of nonconscious processes.”
You ask, does Gazzaniga’s explanation apply to explanations made in advance of the actions? The key point I’ve highlighted answers that question. In particular, your explanation of the actions you plan to take is (well, seems to me to be) the result of conscious processes. You consciously apprehended that you need groceries and consciously formulated a plan to fulfill that need.
It seems to me that in common usage, when a person says “I thought there was a snake” they mean something closer to, “I thought I consciously apprehended the presence of a snake,” than, “some low-level perceptual processing pattern-matched ‘snake’ and sent motor signals for retreating before I had a chance to consider the matter consciously.”
Yes, he says that. And then he says:

listening to people’s explanations of their actions is interesting—and in the case of politicians, entertaining—but often a waste of time.
thus extending the anecdote of snakes in the grass to a parable that includes politicians’ speeches.
Or perhaps they mean “I heard a sound that might be a snake”. As long as we’re just making up scenarios, we can slant them to favour any view of consciousness we want. This doesn’t even rise to the level of anecdote.