> In the opinion of one of the authors, the gap can be explained by positing that humans are purely syntactic beings, but ones that have been selected by evolution such that human mental symbols correspond with real-world objects and concepts…
Finding that difficult to process. Is the correspondence supposed to be some sort of coincidental, occasionalistic thing? But why shouldn’t a naturalist appeal to causation to ground symbols?
The causation would go as “beings with well grounded mental symbols are destroyed less often by the universe; beings with poorly grounded mental symbols are destroyed very often”.
That’s predictive accuracy. You can have predictive accuracy whilst badly misunderstanding the ontology of your perceived world. In fact, you can have predictive accuracy (doing the life-preserving thing in a given situation) without doing anything recognisable as symbolic processing. And of the options, getting the ontology right is the more intuitive unpacking of “grounding a symbol”.
The more complex your model, and the more complex reality is, the closer the correspondence between them, and the more your internal model acts as if it is “learning something” (making incorrect predictions, processing the data, then making better ones), the less scope there is for your symbols to be ungrounded.
It’s always possible, but the level of coincidence needed to have a wrong model that behaves exactly the same as the right one is huge. And, I’d say, having the wrong model that gives the right predictions is just the same as having the right model with randomised labels. And since the labels are pretty meaningless anyway…
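A minimal sketch of the relabelling point, treating a “model” as nothing more than a labelled state-transition table (the weather states and labels here are made up for illustration): permute the internal labels and every prediction comes out the same.

```python
import random

# Toy model: (internal state, observation) -> (next state, prediction).
model = {
    ("cloudy", "wind"): ("stormy", "rain"),
    ("cloudy", "calm"): ("clear",  "sun"),
    ("stormy", "wind"): ("stormy", "rain"),
    ("stormy", "calm"): ("cloudy", "rain"),
    ("clear",  "wind"): ("cloudy", "sun"),
    ("clear",  "calm"): ("clear",  "sun"),
}

def predict(model, start, observations):
    """Run the model forward, collecting its predictions."""
    state, out = start, []
    for obs in observations:
        state, prediction = model[(state, obs)]
        out.append(prediction)
    return out

# Randomise the internal labels: a "wrong" (differently labelled) model.
states = ["cloudy", "stormy", "clear"]
relabel = dict(zip(states, random.sample(states, len(states))))
scrambled = {(relabel[s], obs): (relabel[s2], pred)
             for (s, obs), (s2, pred) in model.items()}

observations = ["wind", "calm", "calm", "wind", "wind"]
assert (predict(model, "cloudy", observations)
        == predict(scrambled, relabel["cloudy"], observations))
```

The assertion holds for any permutation of the labels: nothing in the model’s predictive behaviour pins down what its internal states are states *of*.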
> The more complex your model, and the more complex reality is, the closer the correspondence between them, and the more your internal model acts as if it is “learning something” (making incorrect predictions, processing the data, then making better ones), the less scope there is for your symbols to be ungrounded.
That seems to merely assert what I was arguing against… I was arguing that predictive accuracy is orthogonal to ontological correctness, and that grounding has to do with ontological correctness.
> It’s always possible, but the level of coincidence needed to have a wrong model that behaves exactly the same as the right one is huge.
“Right” and “wrong” don’t have a univocal meaning here. A random model will have poor predictive accuracy, but you can still have two models of equivalent predictive accuracy with different ontological implications.
> And, I’d say, having the wrong model that gives the right predictions is just the same as having the right model with randomised labels.
You seem to be picturing a model as a graph with labelled vertices, and assuming that two equally good models must have the same structure. That is not so.
A first way to see this: the Ptolemaic system can be made as accurate as you want for generating predictions by adding extra epicycles, although it is false in the sense of lacking ontological accuracy, since epicycles don’t exist.
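The “as accurate as you want” part is just Fourier analysis in disguise: a deferent plus epicycles is a finite complex Fourier series, so adding terms drives the error on any smooth closed orbit towards zero. A sketch under that reading (the off-centre ellipse is an illustrative stand-in, not Ptolemy’s actual construction):

```python
import numpy as np

N = 256
t = np.linspace(0, 2 * np.pi, N, endpoint=False)

# "True" orbit: an off-centre ellipse, traced in the complex plane.
orbit = 5 * np.cos(t) + 3j * np.sin(t) + (1 + 0.5j)

# Each Fourier coefficient is one epicycle: a circle of radius |c_k|,
# traversed k times per revolution.
coeffs = np.fft.fft(orbit) / N

def epicycle_orbit(n_epicycles):
    """Rebuild the orbit from only the n largest epicycles."""
    largest = np.argsort(-np.abs(coeffs))[:n_epicycles]
    return sum(coeffs[k] * np.exp(1j * k * t) for k in largest)

for n in (1, 2, 3):
    err = np.max(np.abs(orbit - epicycle_orbit(n)))
    print(f"{n} epicycle(s): max error {err:.2e}")
```

Three circles already reproduce this orbit to machine precision, and a messier orbit just needs more of them, with no commitment anywhere to the circles being real.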
Another way is to notice that ontological revolutions can make merely modest changes to predictive abilities. Relativity overturned the absolute space and time of Newtonian physics, but its predictions were so close that subtle experiments were required to distinguish the two.
In that case there is still a difference in empirical predictiveness. A third way is the extreme case where there is not: you can have two ontologies that always make the same predictions, the one being dual to the other. An example is wave-particle duality in quantum mechanics.
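This isn’t wave-particle duality itself, but a toy numerical version of the same structural point, assuming only a finite-dimensional state and a unitary change of description (the state and observable below are arbitrary): the two descriptions disagree about what the amplitudes are amplitudes *of*, yet agree on every predicted number.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# A random normalised state, written in the "position" description.
psi_pos = rng.normal(size=d) + 1j * rng.normal(size=d)
psi_pos /= np.linalg.norm(psi_pos)

# A unitary discrete Fourier transform relates the two descriptions.
U = np.fft.fft(np.eye(d), norm="ortho")
psi_mom = U @ psi_pos  # the same state, in the "momentum" description

# An arbitrary observable, expressed in each description.
H = rng.normal(size=(d, d))
A_pos = H + H.T                 # Hermitian, position picture
A_mom = U @ A_pos @ U.conj().T  # the same observable, momentum picture

# Both descriptions predict the same expectation value.
exp_pos = (psi_pos.conj() @ A_pos @ psi_pos).real
exp_mom = (psi_mom.conj() @ A_mom @ psi_mom).real
assert np.isclose(exp_pos, exp_mom)
```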
The fourth way is based on sceptical hypotheses, such as Brain in a Vat and Simulated Reality. Sceptical hypotheses can be rejected, for instance by appeal to Occam’s Razor, but they cannot be refuted empirically, since any piece of empirical evidence is subject to sceptical interpretation. And Occam’s Razor is not empirical.
Science conceives of perception as based in causation, and causation as consisting of chains of causes and effects, with only the ultimate effect, the sensation evoked in the observer, being directly accessible to the observer. The cause of the sensation, the other end of the causal chain, the thing observed, has to be inferred from the sensation, the ultimate effect; and it cannot be inferred uniquely, since, in general, more than one cause can produce the same effect. A further proxy can always be inserted into a series of proxies. All illusions, from holograms to stage conjuring, work by producing the effect, the percept, in an unexpected way. A BIV or Matrix observer would assume that the percept of a horse is caused by a horse, but it would actually be caused by a mad scientist pressing buttons.
A BIV or Matrix inhabitant could come up with science that works, that is useful for many purposes, so long as their virtual reality had some stable rules. They could infer that dropping an (apparent) brick onto their (apparent) foot would cause pain, and so on. It would be like the player of a computer game being skilled in the game, knowing its internal physics. The science of the Matrix inhabitants would work, in a sense, but the workability of their science would be limited to relating apparent causes to apparent effects, not to grounding causes and effects in ultimate reality. But empiricism cannot tell us that we are not in the same situation.
In the words of Werner Heisenberg (Physics and Philosophy, 1958): “We have to remember that what we observe is not nature herself, but nature exposed to our method of questioning.”
We don’t seem to be disagreeing about anything factual. You just want grounding to be in “the fundamental ontology”, while I’m content with symbols being grounded in the set of everything we could observe. If you like, I’m using Occam or simplicity priors on ontologies; if there are real objects behind the ones we can observe, but we never know about them, I’d still count our symbols as grounded. (That’s why I’d count virtual Napoleon’s symbols as being grounded in virtual Waterloo, incidentally.)
Being relatively liberal about symbol grounding makes it easier to answer Searle, but harder to answer other people, such as people who think germs or atoms are just social constructs.
What predictions do they make when looking into microscopes or treating infectious diseases?

Exactly the same… that is the point of predictive accuracy being orthogonal to ontological accuracy: you can vary the latter without affecting the former.
“Just social constructs” is (almost always) not a purely ontological statement, though. And as for those who think that germs are a social construct, but that the predictions of germ theory are still accurate… well, it doesn’t really matter what they think; they just seem to have different labels from the rest of us for the same things.
As the author of the phrase, I meant “just social constructs” to be an ontological statement.
Are you saying they are actually realists about germs and atoms, and are stating their position dishonestly? Do you think “is real” is just a label in some unimportant way?
> Do you think “is real” is just a label in some unimportant way?
Maybe. I’m not entirely sure what your argument is. For instance, were the matrices of the matrix-mechanics formulation of quantum physics “real”? Were the waves of the wave formulation of QM “real”? The two formulations are equivalent, and it doesn’t seem useful to debate the reality of their individual idiosyncratic components this way.