My point would be that a term’s referent has to be picked out by its sense. No existing entity is fat AND jolly AND lives at the North Pole AND delivers presents, so no existing referent fulfils the sense.
No existing entity is fat AND jolly AND lives at the North Pole AND delivers presents, so no existing referent fulfils the sense.
This simply means that “an entity that is fat AND jolly AND lives at the North Pole AND delivers presents” shouldn’t be chosen as a referent for “Santa”. However, there is a particular neural pattern (most likely a set of similar neural patterns, actually) that corresponds to a mental image of “an entity that is fat AND jolly AND lives at the North Pole AND delivers presents”; moreover, this neural pattern (or set of neural patterns) exists across a large fraction of the human population. I’m perfectly fine with letting the word “Santa” refer to this pattern (or set of patterns). Is there a problem with that?
OK, so let’s consider the set of neural patterns (and corresponding artificial signals/symbols) you refer to here… the patterns that the label “Santa” can be used to refer to. For convenience, I’m going to label that set of neural patterns N.
I mean here to distinguish N from the set of flesh-and-blood-living-at-the-North-Pole patterns that the label “Santa” can refer to. For convenience, I’m going to label that set of patterns S.
So, I agree that N exists, and I assume you agree that S does not exist.
You further say:
“I’m perfectly fine with letting the word “Santa” refer to this pattern (or set of patterns).”
...in other words, you’re fine with letting “Santa” refer to N, and not to S. Yes?
Is there a problem with that?
Well, yes, in that I don’t think it’s possible.
I mean, I think it’s possible to force “Santa” to refer to N, and not to S, and you’re making a reasonable effort at doing so here. And once you’ve done that, you can say “Santa exists” and communicate exists(N) without communicating exists(S).
But I also think that, without that effort being made, what “Santa exists” will communicate is exists(S).
And I also think that one of the most reliable natural ways of expressing exists(N) without communicating exists(S) is by saying “Santa doesn’t exist.”
Put another way: it’s as though you said to me that you’re perfectly fine with letting the word “fish” refer to cows. There’s no problem with that, particularly; if “fish” ends up referring to cows when allowed to, I’m OK with that. But my sense of English is that, in fact, “fish” does not end up referring to cows when allowed to, and when you say “letting” you really mean forcing.
That seems fair. What I was mostly trying to get at was a way to describe Santa without admitting his existence; for instance, I could say, “Santa wears a green coat!” and you’d be able to say, “That’s wrong!” without either of us ever claiming that Santa actually exists. In other words, we would be communicating information about N, but not S.
More generally speaking, this problem usually arises whenever a word has more than one meaning, and information about which meaning is in use is conveyed through context. As usual, discussion of the meaning of words leaves out a lot of details about how humans actually communicate (for instance, an absolutely enormous amount of communication occurs through nonverbal channels). Overloaded words occur all the time in human communication, and “Santa” just happens to be one of these overloaded words; it occasionally refers to S, occasionally to N. Most of the time, you can tell which meaning is being used, but in a discussion of language, I agree I was being imprecise. The concept of overloading a word just didn’t occur to me at the time I was typing my original comment, for whatever reason.
And it is admittedly kind of funny that I can say “Superman is from Krypton, not from Vulcan!” and be understood as talking about a fictional character in a body of myth, but if I say “Superman really exists” nobody understands me the same way (though in the Superman mythos, Superman both really exists and is from Krypton). A parsing model that got that quirk right without special-case handling would really be on to something.
A way to communicate exists(N) and not exists(S) in a way that doesn’t depend on the context of the current conversation might be “‘Santa’ exists, but Santa does not.” Of course, the existence of “Santa” is granted when “Santa does not exist” is understood by the other person, so this is really just a slightly less ambiguous way of saying “Santa does not exist.”
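To make the exists(N)/exists(S) bookkeeping above concrete, here is a minimal sketch (Python; all names are invented for illustration and nothing below comes from the thread itself) that keeps the shared representation N strictly separate from a would-be flesh-and-blood referent S:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Representation:
    """N: the widely shared mental image/symbol associated with a word."""
    label: str
    features: frozenset

@dataclass(frozen=True)
class PhysicalEntity:
    """S: an actual flesh-and-blood entity the word might refer to."""
    name: str

# N exists: people really do carry around a "fat, jolly, North Pole" image.
santa_representation = Representation(
    label="Santa",
    features=frozenset({"fat", "jolly", "lives at the North Pole", "delivers presents"}),
)

# S does not exist: no physical entity satisfies those features.
santa_entity: Optional[PhysicalEntity] = None

exists_N = santa_representation is not None  # True
exists_S = santa_entity is not None          # False

print(exists_N, exists_S)  # "Santa" exists (as N) but Santa does not (as S)
```

On this toy picture, “Santa does not exist” reports exists_S being False, while the fact that the sentence is understood at all is evidence that exists_N is True.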
This simply means that “an entity that is fat AND jolly AND lives at the North Pole AND delivers presents” shouldn’t be chosen as a referent for “Santa”.
That is the exact opposite of what I was saying. An entity that is fat and jolly, etc., should, normatively, be chosen as the referent of “Santa”, and in the absence of any such entity, Santa has no referent. AFAICT you are tacitly assuming that every term must have a referent, however unrelated to its sense. I am not. Under the Fregean scheme, I can cash out fictional terms as terms with no referents.
However, there is a particular neural pattern (most likely a set of similar neural patterns, actually) that corresponds to a mental image of “an entity that is fat AND jolly AND lives at the North Pole AND delivers presents”;
I’m not disputing that. What I am saying is that such neural patterns are the referents of “neural pattern representing a fat jolly man...”, not referents of “Santa”.
moreover, this neural pattern (or set of neural patterns) exists across a large fraction of the human population. I’m perfectly fine with letting the word “Santa” refer to this pattern (or set of patterns). Is there a problem with that?
Several.
Breaks the rule that referents are picked out by senses.
Entails map/territory confusions.
Blurs fiction/fact boundary.
Inconsistent: sometimes “X” has referent X, sometimes it has referent “representation of X”.
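A minimal sketch of the Fregean scheme being defended above (Python; the toy “world” and the predicates are invented for illustration, not taken from the discussion): the referent, if any, is whatever existing entity the sense picks out, and nothing is substituted when the search fails.

```python
from typing import Callable, Optional

Entity = dict  # toy stand-in for something that actually exists

def referent_of(sense: Callable[[Entity], bool], world: list) -> Optional[Entity]:
    """Fregean rule: a term's referent is whatever existing entity its sense picks out.
    If nothing satisfies the sense, the term simply has no referent; a mental
    representation of the entity is NOT substituted in its place."""
    matches = [e for e in world if sense(e)]
    return matches[0] if matches else None

world = [
    {"name": "Mount Everest", "tallest_mountain": True},
    {"name": "a department-store actor", "fat": True, "jolly": True, "delivers_presents": False},
]

everest_sense = lambda e: e.get("tallest_mountain", False)
santa_sense = lambda e: (e.get("fat", False) and e.get("jolly", False)
                         and e.get("delivers_presents", False))

print(referent_of(everest_sense, world))  # the Everest entity: the sense picks out a referent
print(referent_of(santa_sense, world))    # None: on this scheme "Santa" has no referent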
Look, I think you’ve maybe forgotten that this conversation started when you took issue with this part of the article:
difference is that in all three of the first processes, the symbols in the brain correspond to objects in reality (or virtual reality).
To which Stuart replied:
If a human plays starcraft 2 and has a symbol for Protoss Carrier, does that mean the human’s symbol is suddenly ungrounded?
And then you said:
If fictions can ground symbols, then what is wrong with having Santa, the tooth fairy, and unicorns in your ontology?
And from here the conversation branched off. Several comments in, and you have now managed to divert this conversation into a discussion on philosophy of language, all the while entirely ignoring the fact that your stated concerns are irrelevant to your original contention. Let’s take a look at each of your complaints:
An entity that is fat and jolly, etc., should, normatively, be chosen as the referent of “Santa”, and in the absence of any such entity, Santa has no referent.
You have now utterly divorced this conversation from the issue which first prompted it. The confusion here stems from the fact that the traditional tale of “Santa” tells of a physical man who physically exists at the physical North Pole. None of that applies to virtual reality, which was the part of the article you originally took umbrage at. Nor is it the case for Stuart’s example of the Protoss Carrier in Starcraft 2. In these examples, objects in virtual reality/the computer model of the Protoss Carrier should “normatively be chosen as the referents” (as you phrased it) of the phrases “objects in virtual reality”/”the Protoss Carrier”.
What I am saying is that such neural patterns are the referents of “neural pattern representing a fat jolly man...”, not referents of “Santa”.
What is the referent of “Protoss Carrier”, if not “computer-generated video game model of the Protoss Carrier”?
Breaks the rule that referents are picked out by senses.
Again, irrelevant to the original example.
Entails map/territory confusions.
Still irrelevant.
Blurs fiction/fact boundary.
Still irrelevant.
Inconsistent: sometimes “X” has referent X, sometimes it has referent “representation of X”.
Still irrelevant, and you can easily tell from context besides.
Look, you’ve performed what is known as a conversational “bait-and-switch”, wherein you present one idea for discussion, and when another engages you on that idea, you back out and start talking about something that seems maybe-a-little-bit-possibly-slightly-tangentially-related-if-you-don’t-squint-at-it-too-hard. Stick to the topic at hand, please.
EDIT: And in fact, this entire confusion stems from your original use of the word “fiction”. You’ve implicitly been using the word with two meanings in mind, in an analogous fashion to how we’ve been using “Santa” to refer to different things:
Something which does not exist in physical reality except by being instantiated in some virtual representation or map, and makes no claim to exist in physical reality. This is the definition you used when first addressing Stuart.
Something which does not exist in physical reality except by being instantiated in some virtual representation or map, and is also claimed to exist in physical reality. This is the definition you began using when you first brought up Santa and unicorns, and it’s the definition you’ve been using ever since.
In retrospect, I should have seen that and called you out on it immediately, but I didn’t look too closely, despite a nagging feeling that something strange was going on when I first read your comment. Let’s keep words from being overloaded, neh? That’s what happened with Santa, after all.
you have now managed to divert this conversation into a discussion on philosophy of language,
The discussion is about the Chinese Room, the CR is about semantics, and philosophy of language is relevant to semantics, so I don’t see the diversion.
None of that applies to virtual reality, which was the part of the article you originally took umbrage at.
I don’t see why not. VR isn’t a place where things exist; rather, it relies on representation and intentionality as much as a picture or novel does.
What I didn’t say before is that VR is particularly problematic in the case of the CR, because a CR that is grounding its symbols in VR could reconfigure itself with the VR running internally... you then have the paradox of a CR with supposedly grounded symbols, but no contact with the outside world.
What is the referent of “Protoss Carrier”, if not “computer-generated video game model of the Protoss Carrier”?
As I stated before, I don’t think there has to be one.
That gives me a way of cashing out the fact/fiction distinction. I don’t know what your alternative is, because you haven’t given one.
As before, you seem to be arguing from a tacit assumption that all terms must have referents.
Irrelevant....irrelevant.
I was arguing against the all-must-have-referents theory, and inasmuch as you are still using it, it is still relevant.
you can easily tell from context besides.
I was tacitly using the assumption, common in analytical philosophy, that theories should be built on cleaned-up versions of natural language... hence “normatively”. How do you build theories, dxu?
Stick to the topic at hand, please.
So what do you think of the CR, dxu?
Something which does not exist in physical reality except by being instantiated in some virtual representation or map, and is also claimed to exist in physical reality.
I didn’t say that. Things don’t, properly speaking, exist in virtual reality; they are represented in it. Which adds nothing to neural representations.
That being the case, I am saying a fictional term...
1. ...has no referent...
2. ...and is not intended to.
But these are not different theories. 2 is just a ramification of 1, a second iteration.
I didn’t say that. Things don’t, properly speaking, exist in virtual reality; they are represented in it.
Right, and terms for objects in virtual reality can refer to those representations. That’s the same thing I was getting at with my distinction involving Santa and neural patterns, except in this case, there’s no claim that the VR object exists in the physical world, and thus nothing to get confused about. Hence, your objection to Santa does not apply here.
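A small sketch of that point (Python; the field names and values are invented for illustration): the phrase “Protoss Carrier” can take the in-game model itself as its referent, and since nobody asserts that a physical carrier exists, the Santa-style ambiguity never gets started.

```python
from dataclasses import dataclass

@dataclass
class GameModel:
    """A representation instantiated inside the running game (data plus process)."""
    unit_name: str
    asset_file: str
    hit_points: int

# The in-game model really exists, in the ordinary sense that the running program
# contains it, so taking it as the referent of "Protoss Carrier" involves no claim
# that a physical starship exists anywhere.
protoss_carrier_model = GameModel("Protoss Carrier", "carrier_model.m3", 300)

referent_of_phrase = protoss_carrier_model  # the VR representation itself
claims_physical_existence = False           # unlike "Santa", nothing is asserted about the physical world

print(referent_of_phrase.unit_name, claims_physical_existence)
```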
The linked article doesn’t really demonstrate that. In particular, if you are going to appeal to robot bodies as giving a level of causal connection sufficient to ground symbols, then Searle still has a point about the limitations of abstract, unembodied software.
And how is the “all-must-have-referents theory” relevant to Stuart Armstrong’s original example which you first took issue with?
I’m bringing it up because you are. It’s like you’re saying it’s OK for you to appeal to unjustified premises, but if I bring it up, I’m at fault for changing the subject.
build theories by observing how real humans communicate.
If that means taking their statements at face value, without allowing for metaphor or misleading phraseology... then I have to tell you that there is a thing called a cold that someone can catch, and a thing called a temper someone can lose.
Right, and terms for objects in virtual reality can refer to those representations
Is that a fact? In particular, is it a fact that people are referring in my technical sense of “referring”, and not in some loose and popular sense, e.g. talking about?
I’m not quite sure what you mean by that. I looked up the phrase, and it returned an 1897 article in The New York Sun, but besides the obvious fact that both my comment and the article deal with the existence (or non-existence) of Santa Claus, I’m not seeing a huge connection here. Could you possibly expand?
My point would be that a term’s referent has to be picked out by its sense. No existing entity is fat AND jolly AND lives at the North Pole AND delivers presents, so no existing referent fulfils the sense.
This simply means that “an entity that is fat AND jolly AND lives at the North Pole AND delivers presents” shouldn’t be chosen as a referent for “Santa”. However, there is a particular neural pattern (most likely a set of similar neural patterns, actually) that corresponds to a mental image of “an entity that is fat AND jolly AND lives at the North Pole AND delivers presents”; moreover, this neural pattern (or set of neural patterns) exists across a large fraction of the human population. I’m perfectly fine with letting the word “Santa” refer to this pattern (or set of patterns). Is there a problem with that?
My $0.02…
OK, so let’s consider the set of neural patterns (and corresponding artificial signals/symbols) you refer to here… the patterns that the label “Santa” can be used to refer to. For convenience, I’m going to label that set of neural patterns N.
I mean here to distinguish N from the set of flesh-and-blood-living-at-the-North-Pole patterns that the label “Santa” can refer to. For convenience, I’m going to label that set of patterns S.
So, I agree that N exists, and I assume you agree that S does not exist.
You further say:
“I’m perfectly fine with letting the word “Santa” refer to this pattern (or set of patterns).”
...in other words, you’re fine with letting “Santa” refer to N, and not to S. Yes?
Well, yes, in that I don’t think it’s possible.
I mean, I think it’s possible to force “Santa” to refer to N, and not to S, and you’re making a reasonable effort at doing so here. And once you’ve done that, you can say “Santa exists” and communicate exists(N) without communicating exists(S).
But I also think that, without that effort being made, what “Santa exists” will communicate is exists(S).
And I also think that one of the most reliable natural ways of expressing exists(N) without communicating exists(S) is by saying “Santa doesn’t exist.”
Put another way: it’s as though you said to me that you’re perfectly fine with letting the word “fish” refer to cows. There’s no problem with that, particularly; if “fish” ends up referring to cows when allowed to, I’m OK with that. But my sense of English is that, in fact, “fish” does not end up referring to cows when allowed to, and when you say “letting” you really mean forcing.
That seems fair. What I was mostly trying to get at was a way to describe Santa without admitting his existence; for instance, I could say, “Santa wears a green coat!” and you’d be able to say, “That’s wrong!” without either of us ever claiming that Santa actually exists. In other words, we would be communicating information about N, but not S.
More generally speaking, this problem usually arises whenever a word has more than one meaning, and information about which meaning is in use is conveyed through context. As usual, discussion of the meaning of words leaves out a lot of details about how humans actually communicate (for instance, an absolutely enormous amount of communication occurs through nonverbal channels). Overloaded words occur all the time in human communication, and “Santa” just happens to be one of these overloaded words; it occasionally refers to S, occasionally to N. Most of the time, you can tell which meaning is being used, but in a discussion of language, I agree I was being imprecise. The concept of overloading a word just didn’t occur to me at the time I was typing my original comment, for whatever reason.
(nods) Yes, agreed with all of this.
And it is admittedly kind of funny that I can say “Superman is from Krypton, not from Vulcan!” and be understood as talking about a fictional character in a body of myth, but if I say “Superman really exists” nobody understands me the same way (though in the Superman mythos, Superman both really exists and is from Krypton). A parsing model that got that quirk right without special-case handling would really be on to something.
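One way to picture how that quirk could fall out of a single rule is the toy sketch below (Python, with invented structures; this is not a claim about any actual parser): descriptive claims are checked against the body of fiction that fixes the term’s sense, while existence claims are checked against the actual world.

```python
# What the Superman mythos says fixes the term's sense; the actual world is
# checked separately for a referent. One uniform rule, no special case for "exists".

mythos = {
    "Superman": {"home": "Krypton"},
}
actual_world: set = set()  # no entity answering to "Superman" exists

def claim_about_the_fiction(term: str, attribute: str, value: str) -> bool:
    """'Superman is from Krypton' is assessed against the body of fiction."""
    return mythos.get(term, {}).get(attribute) == value

def really_exists(term: str) -> bool:
    """'Superman really exists' is assessed against the actual world instead."""
    return term in actual_world

print(claim_about_the_fiction("Superman", "home", "Krypton"))  # True: right about the myth
print(claim_about_the_fiction("Superman", "home", "Vulcan"))   # False: wrong about the myth
print(really_exists("Superman"))                               # False: no referent in reality
```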
The Sense/Reference distinction handles this all out of the box, without the assumption that only certain words have double meanings.
E.g., the correct sense of “Superman” is being from Krypton. But Superman has no referent... is fictional, does not exist.
It also forces you to reject objects in virtual reality as “real”.
News to me. How?
A way to communicate exists(N) and not exists(S) in a way that doesn’t depend on the context of the current conversation might be “‘Santa’ exists, but Santa does not.” Of course, the existence of “Santa” is granted when “Santa does not exist” is understood by the other person, so this is really just a slightly less ambiguous way of saying “Santa does not exist.”
Slightly.
Yes. The not-exists(S) is explicit in “there is no Santa”; the exists(N) is implicit in the fact that listener and speaker understood each other.
That is the exact opposite of what I was saying. An entity that is fat and jolly, etc., should, normatively, be chosen as the referent of “Santa”, and in the absence of any such entity, Santa has no referent. AFAICT you are tacitly assuming that every term must have a referent, however unrelated to its sense. I am not. Under the Fregean scheme, I can cash out fictional terms as terms with no referents.
I’m not disputing that. What I am saying is that such neural patterns are the referents of “neural pattern representing a fat jolly man...”, not referents of “Santa”.
Several.
Breaks the rule that referents are picked out by senses.
Entails map/territory confusions.
Blurs fiction/fact boundary.
Inconsistent: sometimes “X” has referent X, sometimes it has referent “representation of X”.
Look, I think you’ve maybe forgotten that this conversation started when you took issue with this part of the article: “difference is that in all three of the first processes, the symbols in the brain correspond to objects in reality (or virtual reality).”
To which Stuart replied: “If a human plays starcraft 2 and has a symbol for Protoss Carrier, does that mean the human’s symbol is suddenly ungrounded?”
And then you said: “If fictions can ground symbols, then what is wrong with having Santa, the tooth fairy, and unicorns in your ontology?”
And from here the conversation branched off. Several comments in, and you have now managed to divert this conversation into a discussion on philosophy of language, all the while entirely ignoring the fact that your stated concerns are irrelevant to your original contention. Let’s take a look at each of your complaints:
You have now utterly divorced this conversation from the issue which first prompted it. The confusion here stems from the fact that the traditional tale of “Santa” tells of a physical man who physically exists at the physical North Pole. None of that applies to virtual reality, which was the part of the article you originally took umbrage at. Nor is it the case for Stuart’s example of the Protoss Carrier in Starcraft 2. In these examples, objects in virtual reality/the computer model of the Protoss Carrier should “normatively be chosen as the referents” (as you phrased it) of the phrases “objects in virtual reality”/”the Protoss Carrier”.
What is the referent of “Protoss Carrier”, if not “computer-generated video game model of the Protoss Carrier”?
Again, irrelevant to the original example.
Still irrelevant.
Still irrelevant.
Still irrelevant, and you can easily tell from context besides.
Look, you’ve performed what is known as a conversational “bait-and-switch”, wherein you present one idea for discussion, and when another engages you on that idea, you back out and start talking about something that seems maybe-a-little-bit-possibly-slightly-tangentially-related-if-you-don’t-squint-at-it-too-hard. Stick to the topic at hand, please.
EDIT: And in fact, this entire confusion stems from your original use of the word “fiction”. You’ve implicitly been using the word with two meanings in mind, in an analogous fashion to how we’ve been using “Santa” to refer to different things:
Something which does not exist in physical reality except by being instantiated in some virtual representation or map, and makes no claim to exist in physical reality. This is the definition you used when first addressing Stuart.
Something which does not exist in physical reality except by being instantiated in some virtual representation or map, and is also claimed to exist in physical reality. This is the definition you began using when you first brought up Santa and unicorns, and it’s the definition you’ve been using ever since.
In retrospect, I should have seen that and called you out on it immediately, but I didn’t look too closely, despite a nagging feeling that something strange was going on when I first read your comment. Let’s keep words from being overloaded, neh? That’s what happened with Santa, after all.
The discussion is about the Chinese Room, the CR is about semantics, and philosophy of language is relevant to semantics, so I don’t see the diversion.
I don’t see why not. VR isn’t a place where things exist; rather, it relies on representation and intentionality as much as a picture or novel does.
What I didn’t say before is that VR is particularly problematic in the case of the CR, because a CR that is grounding its symbols in VR could reconfigure itself with the VR running internally... you then have the paradox of a CR with supposedly grounded symbols, but no contact with the outside world.
As I stated before, I don’t think there has to be one.
That gives me a way of cashing out the fact/fiction distinction. I don’t know what your alternative is, because you haven’t given one.
As before, you seem to be arguing from a tacit assumption that all terms must have referents.
I was arguing against the all-must-have-referents theory, and inasmuch as you are still using it, it is still relevant.
I was tacitly using the assumption, common in analytical philosophy, that theories should be built on cleaned-up versions of natural language... hence “normatively”. How do you build theories, dxu?
So what do you think of the CR, dxu?
I didn’t say that. Things don’t, properly speaking, exist in virtual reality; they are represented in it. Which adds nothing to neural representations.
That being the case, I am saying a fictional term...
1. ...has no referent...
2. ...and is not intended to.
But these are not different theories. 2 is just a ramification of 1, a second iteration.
...What?
And how is the “all-must-have-referents theory” relevant to Stuart Armstrong’s original example which you first took issue with?
I build theories by observing how real humans communicate.
Right, and terms for objects in virtual reality can refer to those representations. That’s the same thing I was getting at with my distinction involving Santa and neural patterns, except in this case, there’s no claim that the VR object exists in the physical world, and thus nothing to get confused about. Hence, your objection to Santa does not apply here.
From my first reply:
The linked article doesn’t really demonstrate that. In particular, if you are going to appeal to robot bodies as giving a level of causal connection sufficient to ground symbols, then Searle still has a point about the limitations of abstract, unembodied software.
I’m bringing it up because you are. It’s like you’re saying it’s OK for you to appeal to unjustified premises, but if I bring it up, I’m at fault for changing the subject.
If that means taking their statements at face value, without allowing for metaphor or misleading phraseology... then I have to tell you that there is a thing called a cold that someone can catch, and a thing called a temper someone can lose.
Is that a fact? In particular, is it a fact that people are referring in my technical sense of “referring”, and not in some loose and popular sense, e.g. talking about?
Isn’t that just the contention of “Yes, Virginia...”?
I’m not quite sure what you mean by that. I looked up the phrase, and it returned an 1897 article in The New York Sun, but besides the obvious fact that both my comment and the article deal with the existence (or non-existence) of Santa Claus, I’m not seeing a huge connection here. Could you possibly expand?