It seems worth distinguishing two propositions:
1. “Knowledge is the existence of a correspondence between map and territory; the nature of this correspondence has no bearing on whether it constitutes knowledge.”
2. “Knowledge is the existence of a certain kind of correspondence between map and territory; the nature of this correspondence is important, and determines whether it constitutes knowledge, and how much, and what of.”
Your observation that there are way too many possible correspondences suffices to refute #1. I am not convinced that you’ve offered much reason to reject #2, though of course #2 is of little use without some clarity as to what correspondences constitute knowledge and why. And I’m pretty sure that when people say things like “knowledge is map-territory correspondence” they mean something much more like #2 than like #1.
You’ve looked at some particular versions of #2 and found that they don’t work perfectly. It may be the case that no one has found a concrete version of #2 that does work; I am not familiar enough with the literature to say. But if you’re claiming that #2 is false (which “it just does not seem tenable that knowledge consists of a correspondence between map and territory” seems to me to imply), that seems to me to go too far.
#1 reminds me of a famous argument of Hilary Putnam’s against computational functionalism (i.e., the theory that being a particular mind amounts to executing a particular sort of computation): roughly, that literally any physical object admits a (stupid) mapping onto literally any Turing machine, so the mere existence of such a mapping can’t be what makes something a mind. I don’t think this argument is generally regarded as successful, though here too I’m not sure anyone has an actual concrete proposal for exactly what sort of correspondence between mental states and computations is good enough. In any case, the philosophical literature on this stuff might be relevant even though it isn’t directly addressing your question.
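For what it’s worth, the “stupid mapping” move is easy to exhibit concretely. The snippet below is only an illustration with made-up placeholder states (it is not anything from Putnam himself): if any correspondence whatever is allowed to count, you can always manufacture one by brute pairing.

```python
# A toy rendition of the "stupid mapping" move: given any sequence of
# distinct physical states and any run of a Turing machine, we can
# gerrymander a correspondence just by pairing them up in order.
# Both sequences below are arbitrary placeholders.

physical_states = ["rock@t0", "rock@t1", "rock@t2", "rock@t3"]
tm_run = [("q0", "1011"), ("q1", "1001"), ("q2", "1000"), ("halt", "1000")]

# The "correspondence" is just this lookup table; it carries no explanatory
# weight, which is why nobody wants to count it as implementing the computation.
correspondence = dict(zip(physical_states, tm_run))

for state in physical_states:
    print(state, "->", correspondence[state])
```

The analogue for #1 is the same: a lookup table gives you a “correspondence between map and territory” for free, which is exactly why the bare existence of a correspondence can’t be the whole story.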
Some thoughts on #2.
Hitherto, arguably the only instances of “knowledge” as such have been (in) human minds. It is possible that “knowledge” is a useful term when applied specifically to humans (and might in that case be defined in terms of whatever specific mechanisms of map/territory correspondence our brains use) but that asking “does X know Y?” or “is X accumulating knowledge about Y?” is not a well-defined question if we allow X to be a machine intelligence, an alien, an archangel, etc.
It might happen that, when dealing with some particular class of things very unlike human beings, the most effective abstractions in this area don’t include anything that quite corresponds to “knowledge”. (I don’t have in mind a particular way this would come about.)
It seems to me that the specific objections you’ve raised leave it entirely possible that some definition along the following lines—which is merely a combination of the notions you’ve said knowledge isn’t “just”—could work well:
Given
an agent X,
some part or aspect or possession Y of X, and
some part or aspect of the world Z,
we say that “X is accumulating knowledge of Z in Y” when the following things are true.
(a) There is increasing mutual information (or something like mutual information; I’m not sure that mutual information specifically is the exact right notion here) between Y and Z.
(b) In “most” situations, X’s utility increases as that mutual information does. (That is: in a given situation S that X could be in, for any t let U(t) be the average of X’s utility over possible futures of S in which, shortly afterward, the mutual information is t; then “on the whole” U(t) is an increasing function. “On the whole” means something like “as we allow S to vary, the probability that this is so is close to 1”.)
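To make the shape of this proposal slightly more concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration: a two-valued “world” Z, a record Y that tracks it through a noise level that shrinks over time, and a betting-style utility that pays off when the record is right. It is not meant as a serious formalisation, only to show conditions (a) and (b) being checked side by side.

```python
# Toy illustration of the two-condition definition sketched above.
# The environment, the noise schedule, and the utility are all made up;
# nothing here is a proposal for the "right" formalisation.

from collections import Counter
from math import log2
import random

def mutual_information(pairs):
    """Estimate I(Y; Z) in bits from a list of (y, z) samples."""
    n = len(pairs)
    p_yz = Counter(pairs)
    p_y = Counter(y for y, _ in pairs)
    p_z = Counter(z for _, z in pairs)
    return sum(
        (c / n) * log2((c / n) / ((p_y[y] / n) * (p_z[z] / n)))
        for (y, z), c in p_yz.items()
    )

def sample_world_and_record(noise):
    """Z is a coin flip; Y is the agent's record of it, flipped with probability `noise`."""
    z = random.choice([0, 1])
    y = z if random.random() > noise else 1 - z
    return y, z

def expected_utility(noise, trials=5000):
    """The agent bets on Z using Y, so its payoff improves as the record improves."""
    total = 0
    for _ in range(trials):
        y, z = sample_world_and_record(noise)
        total += 1 if y == z else -1
    return total / trials

random.seed(0)
# As time passes the record gets less noisy: condition (a) says I(Y;Z) rises,
# condition (b) says expected utility rises with it.
for t, noise in enumerate([0.4, 0.3, 0.2, 0.1]):
    samples = [sample_world_and_record(noise) for _ in range(5000)]
    print(f"t={t}: I(Y;Z) ≈ {mutual_information(samples):.3f} bits, "
          f"E[utility] ≈ {expected_utility(noise):.3f}")
```

A system that merely stored the samples without ever betting on them would satisfy (a) and fail (b), which is essentially the point of your camera counterexample discussed below.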
To be clear, the above still leaves lots of important details unspecified, and I don’t know how to specify them. (What exactly do we count as an agent? Not all agenty things exactly have utility functions; what do we mean by “X’s utility”? Is it mutual information we want, or conditional entropy, or absolute mutual information, or what? What probability distributions are we using for these things? How do we cash out “on the whole”? What counts as “shortly afterward”? Etc.)
But I think these fuzzinesses correspond to genuine fuzziness in the concept of “knowledge”. We don’t have a single perfectly well defined notion of “knowledge”, and I don’t see any reason why we should expect that there is a single One True Notion out there. If any version of the above is workable, then probably many versions are, and probably many match about equally well with our intuitive idea of “knowledge” and provide about equal insight.
E.g., one of your counterexamples concerned a computer system that accumulates information (say, images taken by a camera) but doesn’t do anything with that information. Suppose now that the computer system does do something with the images, but it’s something rather simple-minded, and imagine gradually making it more sophisticated and making it use the data in a cleverer way. I suggest that as this happens, we should become more willing to describe the situation as involving “knowledge”, to pretty much the same extent as we become more willing to think of the computer system as an agent with something like utilities that increase as it gathers data. But different people might disagree about whether to say, in a given case, “nah, there’s no actual knowledge there” or not.
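To put some (entirely made-up) flesh on that gradient, here is a sketch of three such systems; none of the classes or behaviours below is meant as a proposed cut-off, just as points along the spectrum.

```python
# Three invented systems along the spectrum from "merely accumulating
# information" to "using it in an agenty way".  Purely illustrative.

class CameraLogger:
    """Stores frames and never consults them: information, arguably not knowledge."""
    def __init__(self):
        self.frames = []

    def observe(self, frame):
        self.frames.append(frame)

class MotionAlarm(CameraLogger):
    """Uses the data, but in a very simple-minded way: any change triggers an alarm."""
    def observe(self, frame):
        if self.frames and frame != self.frames[-1]:
            self.act("alarm")
        super().observe(frame)

    def act(self, action):
        print("action:", action)

class PatrolPlanner(MotionAlarm):
    """Builds a crude model of where motion happens and uses it to decide where
    to look next, so the stored data now feeds into behaviour that (we are
    supposing) raises the system's utility."""
    def __init__(self):
        super().__init__()
        self.motion_counts = {}

    def observe(self, frame):
        if self.frames:
            for i, (old, new) in enumerate(zip(self.frames[-1], frame)):
                if old != new:
                    self.motion_counts[i] = self.motion_counts.get(i, 0) + 1
        super().observe(frame)

    def where_to_look(self):
        # Pixel index with the most observed motion, or None if no data yet.
        return max(self.motion_counts, key=self.motion_counts.get, default=None)

# Example: feed the same frames to the most sophisticated system.
frames = [(0, 0, 0), (0, 1, 0), (0, 1, 1), (0, 0, 1)]
planner = PatrolPlanner()
for f in frames:
    planner.observe(f)
print("watch pixel:", planner.where_to_look())
```

The suggestion above amounts to saying that our willingness to use the word “knowledge” climbs as we move down this list, roughly in step with our willingness to call the thing an agent.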
In one of your posts, you say something like “we mustn’t allow ourselves to treat notions like agent or mind as ontologically basic”. I agree, but I think it’s perfectly OK to treat some such notions as prerequisites for a definition of “knowledge”. You don’t want that merely-information-accumulating system to count as accumulating “knowledge”, I think precisely because it isn’t agenty enough, or isn’t conscious enough, or something. But if you demand that for something to count as an account of knowledge it needs to include an account of what it is to be an agent, or what it is to be conscious, then of course you are going to have trouble finding an acceptable account of knowledge; I don’t think this is really a difficulty with the notion of knowledge as such.
It might turn out that what we want in general is a set of mutually-dependent definitions: we don’t define “agent” and then define “knowledge” in terms of “agent”, nor vice versa, but we say that a notion K of knowledge and a notion A of agency (and a notion … of …, and etc. etc.) are satisfactory if they fit together in the right sort of way. Of course I have no concrete proposal for what the right sort of way is, but it seems worth being aware that this sort of thing might happen. In that case we might be able to derive reasonable notions of knowledge, agency, etc., by starting with crude versions, “substituting them in”, and iterating.
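Purely as a cartoon of what “starting with crude versions, substituting them in, and iterating” might look like: give each notion a score that partly depends on the other and iterate to a fixed point. The update rules below are invented out of thin air; the only point is the structure, mutually-dependent definitions settling down together.

```python
# A cartoon of mutually-dependent definitions: "knowledge" and "agency"
# are each given a crude score in [0, 1] that partly depends on the other,
# and we iterate until the pair stabilises.  The update rules are invented
# purely for illustration.

def iterate_definitions(correspondence, acts_on_state, rounds=100):
    agency, knowledge = 0.0, 0.0   # crude starting versions
    for _ in range(rounds):
        # "knowledge" needs a map/territory correspondence held by something agenty
        new_knowledge = correspondence * (0.3 + 0.7 * agency)
        # "agency" is partly a matter of acting on what one knows
        new_agency = acts_on_state * (0.3 + 0.7 * knowledge)
        if abs(new_knowledge - knowledge) < 1e-12 and abs(new_agency - agency) < 1e-12:
            break
        knowledge, agency = new_knowledge, new_agency
    return knowledge, agency

# A camera that stores data but barely acts on it vs. a system that does both:
print(iterate_definitions(correspondence=0.9, acts_on_state=0.1))
print(iterate_definitions(correspondence=0.9, acts_on_state=0.9))
```

The numbers mean nothing; the only point is that the two scores settle jointly rather than either having to be defined first.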