...while for B, “2” doesn’t really mean anything; it’s just a symbol that it blindly manipulates.
I think I understand what concepts you were gesturing towards with this example, but for me the argument doesn’t go through. The communication failure suggests to me that you might need to dissolve some questions around syntax, semantics, and human psychology. In the absence of clear understanding here I would fear a fallacy of equivocation on the term “meaning” in other contexts as well.
The problem is that B seems to output a “3” every single time it sees a “2” in the input. By “3” it functionally communicates “there was a ‘2’ in the corresponding input,” and presumably a “2” in the output functionally communicates some other stable fact about the input, such as that there was a “Q” in it.
This is a different functional meaning from the one A communicates, but the distance between algorithms A and B isn’t very far. One involves a little bit of code and the other involves a little bit more, but both are relatively small scripts that can be computed using little more memory than is needed to store the input itself.
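To pin down the sense of “functionally communicates” I have in mind, here is a minimal sketch of my own (the specifics are made up, not your actual A and B): two trivial scripts whose outputs reliably track facts about their inputs, which is all the “meaning” that is on the table so far.

```python
# Hypothetical reconstructions, not the actual algorithms A and B from the
# parent example. They just illustrate "stably predictable input/output
# relations" using little more memory than the input itself.

def algorithm_a(text: str) -> str:
    """One possible trivially simple A: echo the input unchanged."""
    return text

def algorithm_b(text: str) -> str:
    """Fixed symbol-by-symbol rule: a '3' in the output reliably indicates
    a '2' in the input, and a '2' reliably indicates a 'Q'."""
    out = []
    for ch in text:
        if ch == "2":
            out.append("3")
        elif ch == "Q":
            out.append("2")
        else:
            out.append(ch)
    return "".join(out)

print(algorithm_b("Q2X"))  # -> "23X": each output symbol tracks a stable fact about the input
```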
I could understand someone using the term “meaning” to capture a sense where A and B are both capable of meaning things, because they functionally communicate something to a human observer by virtue of their stably predictable input/output relations. Equally, however, I would accept a sense where neither algorithm is capable of meaning anything, because (to take one trivial example of the way humans are able to “mean” or “not mean” something) neither algorithm can internally work out the correct response, examine the speech context it finds itself in, and then emit either that correct response or a false one that better achieves its goals within that speech context (such as eliciting laughter or deceiving the listener).
You can’t properly ask algorithm A “Did you really mean that output?” and get back a sensible answer, because algorithm A has no English-parsing abilities, nor a time-varying internal state, nor social-modeling processes capable of internally representing your understanding (or lack of understanding) of its own output, nor a compressed internal representation of goal outcomes (where fiddling with bits of the goal representation would leave algorithm A still producing complex goal-directed behavior, just re-targeted at some outcome other than the one it was aiming for before its goal representation was fiddled with).
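To make that last criterion a bit more concrete, here is a toy sketch of my own (purely illustrative, not anything from your example): an agent whose behavior is organized around a small internal goal variable, so that editing that variable re-targets the same machinery toward a different outcome. Algorithm A has nothing playing this role.

```python
# A toy, hypothetical illustration of a "compressed goal representation":
# the agent's behavior is organized around a single internal goal value,
# and editing that value re-targets the same machinery at a new outcome.

class TinyAgent:
    def __init__(self, goal: int):
        self.goal = goal      # compressed goal representation: one number
        self.position = 0

    def step(self) -> int:
        # Goal-directed behavior: move one unit toward whatever the goal is.
        if self.position < self.goal:
            self.position += 1
        elif self.position > self.goal:
            self.position -= 1
        return self.position

agent = TinyAgent(goal=3)
print([agent.step() for _ in range(5)])   # [1, 2, 3, 3, 3] -- heads for 3

agent.goal = -2                            # "fiddle with" the goal representation
print([agent.step() for _ in range(6)])   # [2, 1, 0, -1, -2, -2] -- same machinery, new target
```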
I’d be willing to accept an argument that used “meaning” in a very mechanistic sense of “reliable indication” or a social sense of “honest communication where dishonest communication was possible” or even some other sense that you wanted to spell out and then re-use in a careful way in other arguments… But if you want to use a primitive sense of “meaning” that applies to a calculator and then claim that that is what I do when I think or speak, then I don’t think I’ll find it very convincing.
My understanding of words like “meaning” and conjugations of “to be” starts from the assumption that they are levers for referring to the surface layer of enormously complex cognitive modeling tools for handling many radically different kinds of phenomena, where it is convenient to paper over the complexity in order to get some job done, like dissecting precisely what a love interest “meant” when they said you “were being” coy. What “means” means, or what “were being” means, in that sort of context is patently obvious to your average 13 year old… except that it’s really hard to spell that sort of thing out precisely enough to re-implement it in code or to express the veridicality conditions in formal logic over primitive observables.
We are built to model minds, just like we are built to detect visual edges. We do these tasks wonderfully and we introspect on them terribly, which means re-using a concept like “meaning” in your foundational moral philosophy is asking for trouble :-P