I’m not entirely sure that we’re still disagreeing. I’m not claiming that fiction is the same as non-fictional entities. I’m saying that something functioning in the human world has to have a category called “fiction”, and has to correctly see the contours of that category.
This gets back to the themes of the Chinese Room. The worry is that if you naively dump a dictionary or encyclopedia into an AI, it won’t have real semantics, because of a lack of grounding, even though it can correctly answer questions, in the way you and I can about Santa.
Yes, just like the point I made about the weakness of the Turing test. The problem is that it uses verbal skills as a test, which means it’s only testing verbal skills.
However, if the Chinese Room walked around in the world, interacted with objects, and basically demonstrated human-level (or higher) prediction, manipulation, and such, AND it operated by manipulating symbols and models, then I’d conclude that those actions demonstrate the symbols and models were grounded. Would you disagree?
I’d say they could be taken to be as grounded as ours. There is still a problem with referential semantics: neither we nor the AI can tell that it isn’t in VR.
Which itself feeds through into problems with empiricism and physicalism.
Since semantics is inherently tricky, there aren’t easy answers to the CR.
If you’re in VR and can never leave it or see evidence of it (e.g. a perfect Descartes’s demon), I see no reason to regard this as different from being in reality. The symbols are still grounded in the baseline reality as far as you could ever tell. Any being you could encounter could check that your symbols are as grounded as you can make them.
Note that this is not the case for an “encyclopaedia Chinese Room”. We could give it legs and make it walk around; and then, when it falls over every time while talking about how easy it is to walk, we’d realise its symbols are not grounded in our reality (which may be VR, but that’s not relevant).
By hypothesis, it isn’t the real reality. Effectively, you are defending physical realism by abandoning realism.
Pretty much, yes.
We should probably call it something like “causalism”: using the word “real” to mean “that with which we could, in principle, interact causally.” I include the “in principle” because there exist, for example, galaxies that are moving away from us so quickly they will someday leave our light-cone. We see their light today, and that’s a causal interaction with the past galaxy where it used to be, but we understand enough about object permanence that we believe we have solid reason to infer there still exists a galaxy moving along the trajectory we witnessed, even when we cannot interact with it directly.
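To put the light-cone point a bit more concretely (a rough sketch, assuming a standard expanding universe with scale factor $a(t)$ and a dark-energy-dominated future): a galaxy sitting at comoving distance $\chi$ drops out of causal reach after the time $t_*$ at which

$$\chi = \int_{t_*}^{\infty} \frac{c\,\mathrm{d}t}{a(t)},$$

since light it emits after $t_*$ can never reach us. With dark energy, this integral shrinks toward zero as $t_*$ grows, so any galaxy carried along by the expansion eventually crosses the threshold; until then, interaction with it remains possible “in principle” in exactly the sense above.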