The linked article doesn’t really demonstrate that. In particular, if you are going to appeal to robot bodies as giving a level of causal connection sufficient to ground symbols, then Searle still has a point about the limitations of abstract, unembodied software.
And how is the “all-must-have-referents theory” relevant to Stuart Armstrong’s original example which you first took issue with?
I’m bringing it up because you are. It’s like you’re saying it’s OK for you to appeal to unjustified premises, but if I point that out, I’m at fault for changing the subject.
build theories by observing how real humans communicate.
If that means taking their statements at face value, without allowing for metaphor or misleading phraseology... then I have to tell you that there is a thing called a cold that someone can catch, and a thing called a temper that someone can lose.
Right, and terms for objects in virtual reality can refer to those representations
Is that a fact? In particular, is it a fact that people are referring in my technical sense of “referring”, and not in some loose and popular sense, e.g. merely talking about?
From my first reply:
The linked article doesn’t really demonstrate that. In particular, if you are going to appeal to robot bodies as giving a level of causal connection sufficient to ground symbols, then Searle still has a point about the limitations of abstract, unembodied software.