Can we define something’s function in terms of the selection pressure it’s going to face in the future?
Consider the case where the part of my brain that lights up for horses lights up for a cow at night. Do I have a broken horse detector or a working horse-or-cow-at-night detector?
I don’t really see how the past-selection-pressure account of function can be used to say that this brain region is malfunctioning when it lights up for cows at night. Lighting up for cows at night might never have been selected against (let’s assume no-one in my evolutionary history has ever seen a cow at night). And lighting up for this particular cow at night has been selected for just as much as lighting up for any particular horse has (i.e. the disposition came about as a side effect of the region usefully lighting up for other animals in the past).
But maybe the future-selection-pressure account could handle this? Suppose the fact that the region lit up for this cow causes me to die young, so future people become a bit more likely to have it just light up for horses. Or suppose it causes me to tell my friend I saw a horse, and he says, “That was a cow; you can tell from the horns,” and from then on it stops lighting up for cows at night. In either of these cases we can say that the behaviour of that brain region was going to be selected against, and so it was malfunctioning.
Whereas if lighting up for cows at night as well as horses is selected for going forward, then I’m happy to say it’s a horse-or-cow-at-night detector even if it causes me to call a cow a horse sometimes.
Sometimes the point is specifically to not update on the additional information, because you don’t trust yourself to update on it correctly.
Classic example: “Projects like this usually take 6 months, but looking at the plan I don’t see why it couldn’t be done in 2… wait, no, I should stick to the reference class forecast.”