Sorry if I’m slow to get it, but my understanding of your view is that the sort of purpose a bacterium has, on the one hand, and the purpose required to be a candidate for rationality, on the other, differ in degree but not in kind. They’re the same thing, just orders of magnitude more sophisticated in the latter case (involving cognitive systems). This is the idea I want to oppose. I have tried to suggest that bacterial purposes are ‘merely’ teleonomic (to borrow the useful term suggested by timtyler), but that human purposes must be of a different order.
Humans have brains, and can better represent future goal states. However, “purpose” in nature ultimately comes from an optimisation algorithm, usually differential reproductive success. Human brains run their own optimisation algorithm, but that algorithm was built by, and reflects the goals of, the reproducers that built it. I would be reluctant to dismiss bacterial purposes. Bacteria are trying to steer the future too; they are just not very good at it.
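For concreteness, the textbook formalisation of “differential reproductive success” is something like the replicator dynamic (my gloss, nothing beyond the standard equation): if type $i$ has frequency $x_i$ and fitness $f_i(x)$, then

$$\dot{x}_i \;=\; x_i\Bigl(f_i(x) - \textstyle\sum_j x_j f_j(x)\Bigr),$$

so frequencies shift toward fitter-than-average types with no foresight anywhere in the dynamics.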
You use a fair bit of normative, teleological vocabulary here: ‘purpose’, ‘goal’, ‘success’, ‘optimisation’, ‘trying’, being ‘good’ at ‘steering’ the future. I understand your point to be that these terms can all be cashed out in unproblematic, teleonomic terms, and that this is more or less the end of the matter: nothing dubious going on here. Is it fair to say, though, that this does not really engage my point, which is that such teleonomic substitutes are insufficient to make sense of rationality?
To make sense of rationality, we need claims such as:
One ought to rank probabilities of events in accordance with the dictates of probability theory (or some more elegant statement to that effect).
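To be concrete about which dictates I mean (just the standard Kolmogorov axioms plus the ratio definition of conditional probability, nothing fancier): one’s credence function $P$ ought to satisfy

$$P(A) \ge 0, \qquad P(\Omega) = 1, \qquad P(A \cup B) = P(A) + P(B) \ \text{for disjoint } A, B, \qquad P(A \mid B) = \frac{P(A \cap B)}{P(B)}.$$

Note that the axioms themselves are pure mathematics; all the normative force is carried by the ‘ought’.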
If you translate this statement, substituting for ‘ought’ the details of its teleonomic ‘ersatz’ correlate, you get a very complicated statement about what one is likely to do in different circumstances, and possibly about one’s ancestors’ behaviours and their relation to those ancestors’ survival chances (all with no norms).
This latter, complicated statement will not mean what the first statement means, and won’t do the job the first statement does in discussions of rationality. The latter statement will be an elaborate description; what’s needed is a prescription.
Probably none of this should matter to someone doing biology, or for that matter decision theory. But if you want to go beyond that and commit to a doctrine like naturalism or physical reductionism, then I submit this does become relevant.
Do you accept that a description of what an ideal agent does is equivalent to a prescription of what a non-ideal agent (with the same goals) should do?
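For instance, take Bayesian conditionalisation as the relevant ideal (purely for illustration): an ideal agent, on learning $E$, revises its credence in $H$ to

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}.$$

That is a description of the ideal agent, but it seems to double as a prescription for what a fallible agent with the same goals should aim at.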
This is a nice way of putting things. As long as we’re clear that what makes it a prescription is the fact that it is an ideal for the non-ideal agent.
Do you think this helps the cause of naturalism?
Yes. Well, it helps with my crusade to show that objective morality can be based on pure reason (abstract reasoning is rather apt for dealing with ideals; it is much easier to reason about a perfect circle than a wobbly, hand-drawn one).