This simply means that “an entity that is fat AND jolly AND lives at the North Pole AND delivers presents” shouldn’t be chosen as a referent for “Santa”.
That is the exact opposite of what I was saying. An entity that is fat and jolly, etc., should, normatively, be chosen as the referent of “Santa”, and in the absence of any such, “Santa” has no referent. AFAICT you are tacitly assuming that every term must have a referent, however unrelated to its sense. I am not. Under the Fregean scheme, I can cash out fictional terms as terms with no referents.
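To make that concrete, here is a minimal sketch, modeling a sense as a predicate over a domain of entities and reference as a partial function from senses to entities. All names and the toy world below are illustrative, not anyone’s actual formalism:

```python
from typing import Callable, Optional

Entity = str
Sense = Callable[[Entity], bool]  # a sense: the condition a referent must satisfy

# Toy world model: each entity and the attributes it actually has.
WORLD = {
    "mall_santa": {"fat", "jolly"},          # satisfies only part of the sense
    "reindeer": {"lives_at_north_pole"},
}

def referent(sense: Sense, domain: dict) -> Optional[Entity]:
    """Reference as a partial function: the unique entity satisfying
    the sense, or None when nothing does."""
    matches = [e for e in domain if sense(e)]
    return matches[0] if len(matches) == 1 else None

def santa_sense(e: Entity) -> bool:
    required = {"fat", "jolly", "lives_at_north_pole", "delivers_presents"}
    return required <= WORLD.get(e, set())

# Nothing in the domain satisfies the full sense, so "Santa" comes back
# with no referent at all -- the term is not forced onto some unrelated
# entity (e.g. a neural pattern) just to have something to point at.
assert referent(santa_sense, WORLD) is None
```

On this picture, a term whose sense nothing satisfies simply comes back empty.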
However, there is a particular neural pattern (most likely a set of similar neural patterns, actually) that corresponds to a mental image of “an entity that is fat AND jolly AND lives at the North Pole AND delivers presents”;
I’m not disputing that. What I am saying is that such neural patterns are the referents of “neural pattern representing a fat jolly man...”, not referents of “Santa”.
moreover, this neural pattern (or set of neural patterns) exists across a large fraction of the human population. I’m perfectly fine with letting the word “Santa” refer to this pattern (or set of patterns). Is there a problem with that?
Several:
1. Breaks the rule that referents are picked out by senses.
2. Entails map/territory confusions.
3. Blurs the fiction/fact boundary.
4. Inconsistent: sometimes “X” has the referent X, sometimes the referent “representation of X”.
Look, I think you’ve maybe forgotten that this conversation started when you took issue with this part of the article:
difference is that in all three of the first processes, the symbols in the brain correspond to objects in reality (or virtual reality).
To which Stuart replied:
If a human plays starcraft 2 and has a symbol for Protoss Carrier, does that mean the human’s symbol is suddenly ungrounded?
And then you said:
If fictions can ground symbols, then what is wrong with having santa , the tooth fairy, and unicorns in your ontology?
And from here the conversation branched off. Several comments in, and you have now managed to divert this conversation into a discussion on philosophy of language, all the while entirely ignoring the fact that your stated concerns are irrelevant to your original contention. Let’s take a look at each of your complaints:
An entity that is fat and jolly, etc., should, normatively, be chosen as the referent of “Santa”, and in the absence of any such, “Santa” has no referent.
You have now utterly divorced this conversation from the issue which first prompted it. The confusion here stems from the fact that the traditional tale of “Santa” tells of a physical man who physically exists at the physical North Pole. None of that applies to virtual reality, which was the part of the article you originally took umbrage at. Nor is it the case for Stuart’s example of the Protoss Carrier in Starcraft 2. In these examples, objects in virtual reality/the computer model of the Protoss Carrier should “normatively be chosen as the referents” (as you phrased it) of the phrases “objects in virtual reality”/”the Protoss Carrier”.
What I am saying is that such neural patterns are the referents of “neural pattern representing a fat jolly man...”, not referents of “Santa”.
What is the referent of “Protoss Carrier”, if not “computer-generated video game model of the Protoss Carrier”?
Breaks the rule that referents are picked out by senses.
Again, irrelevant to the original example.
Entails map/territory confusions.
Still irrelevant.
Blurs the fiction/fact boundary.
Still irrelevant.
Inconsistent: sometimes “X” has the referent X, sometimes the referent “representation of X”.
Still irrelevant, and you can easily tell from context besides.
Look, you’ve performed what is known as a conversational “bait-and-switch”, wherein you present one idea for discussion, and when another engages you on that idea, you back out and start talking about something that seems maybe-a-little-bit-possibly-slightly-tangentially-related-if-you-don’t-squint-at-it-too-hard. Stick to the topic at hand, please.
EDIT: And in fact, this entire confusion stems from your original use of the word “fiction”. You’ve implicitly been using the word with two meanings in mind, analogously to how we’ve been using “Santa” to refer to two different things:
1. Something which does not exist in physical reality except by being instantiated in some virtual representation or map, and makes no claim to exist in physical reality. This is the definition you used when first addressing Stuart.
2. Something which does not exist in physical reality except by being instantiated in some virtual representation or map, and is also claimed to exist in physical reality. This is the definition you began using when you first brought up Santa and unicorns, and it’s the definition you’ve been using ever since.
In retrospect, I should have seen that and called you out on it immediately, but I didn’t look too closely despite there being a nagging feeling that something strange was going on when I first read your comment. Let’s keep words from being overloaded, neh? That’s what happened with Santa, after all.
you have now managed to divert this conversation into a discussion on philosophy of language,
The discussion is about the Chinese Room, the CR is about semantics, and philosophy of language is relevant to semantics, so I don’t see the diversion.
None of that applies to virtual reality, which was the part of the article you originally took umbrage at.
I don’t see why not. VR isn’t a place where things exist; rather, it relies on representation and intentionality as much as a picture or a novel does.
What I didn’t say before was that VR is particularly problematical in the case of the CR, because a CR that is grounding its symbols in VR could reconfigure itself to run the VR internally... you then have the paradox of a CR with supposedly grounded symbols, but no contact with the outside world.
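A minimal sketch of that structural worry, with every name hypothetical (a toy ChineseRoom class, not anyone’s actual proposal): the room’s grounding function resolves symbols entirely within a simulation the room itself runs, so no causal chain ever reaches outside it.

```python
class ChineseRoom:
    """Toy model: a room whose symbols are 'grounded' in a VR
    that the room itself computes internally."""
    def __init__(self):
        # The internal VR is just more of the room's own state.
        self.internal_vr = {"Protoss Carrier": {"kind": "unit", "hp": 300}}

    def ground(self, symbol: str):
        # Symbol resolution never leaves the room: the lookup bottoms
        # out in data the room generated, not in the outside world.
        return self.internal_vr.get(symbol)

room = ChineseRoom()
assert room.ground("Protoss Carrier") == {"kind": "unit", "hp": 300}
# By this criterion the symbol counts as grounded, yet the room has had
# no contact with the outside world -- the alleged paradox.
```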
What is the referent of “Protoss Carrier”, if not “computer-generated video game model of the Protoss Carrier”?
As I stated before, I don’t think there has to be one.
That gives me a way of cashing out the fact/fiction distinction. I don’t know what your alternative is, because you haven’t given one.
As before, you seem to be arguing from a tacit assumption that all terms must have referents.
Irrelevant....irrelevant.
I was arguing against the all-must-have-referents theory, and inasmuch as you are still using it, it is still relevant.
you can easily tell from context besides.
I was tacitly using the assumption, common in analytical philosophy, that theories should be built on cleaned-up versions of natural language... hence, “normatively”. How do you build theories, dxu?
Stick to the topic at hand, please.
So what do you think of the CR, dxu?
Something which does not exist in physical reality except by being instantiated in some virtual representation or map, and is also claimed to exist in physical reality.
I didn’t say that. Things don’t, properly speaking, exist in virtual reality; they are represented in it. Which adds nothing to neural representations.
That being the case, I am saying a fictional term...
1. ...has no referent...
2. ...and is not intended to.
But these are not different theories. 2 is just a ramification of 1, a second iteration.
...What?
I didn’t say that. Things don’t, properly speaking, exist in virtual reality; they are represented in it.
Right, and terms for objects in virtual reality can refer to those representations. That’s the same thing I was getting at with my distinction involving Santa and neural patterns, except in this case, there’s no claim that the VR object exists in the physical world, and thus nothing to get confused about. Hence, your objection to Santa does not apply here.
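As an illustration of that claim — a minimal sketch, assuming we let a game’s own object model serve as the domain of reference (GameObject and the stats below are made up for the example):

```python
from typing import Optional

class GameObject:
    """An in-game representation: it exists as a data structure in the
    running game, with no claim to exist in physical reality."""
    def __init__(self, kind: str, hit_points: int):
        self.kind = kind
        self.hit_points = hit_points

# The game state supplies the domain that VR terms refer into.
game_state = [GameObject("Protoss Carrier", 300), GameObject("Zergling", 35)]

def referent_in_game(term: str, domain: list) -> Optional[GameObject]:
    # "Protoss Carrier" picks out the computer-generated model itself:
    # the representation, with no pretence of a physical starship.
    return next((obj for obj in domain if obj.kind == term), None)

carrier = referent_in_game("Protoss Carrier", game_state)
assert carrier is not None and carrier.hit_points == 300
# A term whose sense nothing in the domain satisfies (the "Santa" case)
# still comes back with no referent:
assert referent_in_game("Santa", game_state) is None
```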
From my first reply:
The linked article doesn’t really demonstrate that. In particular, if you are going to appeal to robot bodies as giving a level of causal connection sufficient to ground symbols, then Searle still has a point about the limitations of abstract, unembodied software.
And how is the “all-must-have-referents theory” relevant to Stuart Armstrong’s original example which you first took issue with?
I’m bringing it up because you are. It’s like you’re saying it’s OK for you to appeal to unjustified premises, but if I bring it up, I’m at fault for changing the subject.
I build theories by observing how real humans communicate.
If that means taking their statements at face value, without allowing for metaphor or misleading phraseology... then I have to tell you that there is a thing called a cold that someone can catch, and a thing called a temper someone can lose.
Right, and terms for objects in virtual reality can refer to those representations
Is that a fact? In particular, is it a fact that people are referring in my technical sense of “referring”, and not in some loose and popular sense, e.g. talking about?