You make your case vividly. It points to a difficulty which arises, I think, in a number of recent LW posts.
We have two conceptions of ourselves which on the face of it are not compatible. On the one hand we think of ourselves as mere physical objects, albeit highly structured and dynamical ones, buffeted by the vicissitudes of environment and the laws of nature, and not in any deep way different from any other physical system in nature. On the other hand, we think of ourselves as agents, in control of our actions and properly subject to appraisal as more or less rational. I take your point to be that ‘you’ (the agent) play no part in a certain class of actions, as these are determined entirely by physiological factors. Once this rock gets rolling (I mean, the explanation of behaviours in purely non-intentional, electro-chemical terms), however, how can it be stopped before it takes out every vestige of agency? That is, how do you delineate the class of actions which ‘you’ really do participate in, once you acknowledge ‘you’ are wholly unnecessary to the explanation of some? Where do you dig in your heels and insist on an explanatory role for an agent, and what sort of a thing will this agent be, in this context?
One alternative to the picture I think you’re suggesting (an alternative recommended in one version, e.g., by Daniel Dennett in "The Intentional Stance") is to recognize that we have different, mutually incompatible ways of understanding and explaining ourselves. In one explanatory idiom, only scientifically respectable (i.e., verifiable) claims are made: we’re just chemicals and electricity. No place is made for agency, and the notion of an action’s being rational, or in any even weak sense good or bad, is as foreign as it would be to ascribe such properties to the internal changes of a single-celled organism.
In another explanatory idiom (used when we take the intentional stance), we do perceive people as agents with purposes and beliefs and desires, and explain their actions in terms of these together with the assumption of some measure of rationality. In this idiom, however, we do find agency even in the smallest actions, assuming these are not simply reflexive (a sneeze, say).
The point, though, is that the explanatory idioms are not mutually reducible—they’re ‘incommensurable’, if you like. You don’t try to reduce the terms of the latter to the former, because there is no way of fitting purposes or beliefs or desires or the agents who harbour them into the purely, merely physical world.
They are neither incompatible nor incommensurable. They are different levels of description. The levels of complexity are so different that it is not useful (nor, with our current level of knowledge, possible) to reduce the agent-level to the physical level; but the agent-level is theoretically and necessarily reducible to the physical level, even if we don’t currently know in detail how to do it. And a person’s beliefs and actions at the agent-level directly affect the physical level.
As I mentioned, there are reasons for thinking they’re incompatible. Something’s (someone’s) being a rational agent implies s/he has goals and that, in light of those goals, s/he ought in given circumstances to do certain things. Physical science makes no place for goals or purposes or right or wrong in nature: there is no physical apparatus which can detect the rightness of an action. Your thought may be that rationality can be made sense of without recourse to goals/purposes or right and wrong. I don’t think this can be done. At best you’d be left with an ersatz which fails to capture what we mean by ‘rational’.