I am not well-read on this topic (or read at all, really), but it struck me as bizarre that a post about epistemology would begin by discussing natural language. This seems to me like trying to grasp the most fundamental laws of physics by first observing the immune systems of birds and the turbulence around their wings.
The relationship between natural language and epistemology is more anthropological* than it is information-theoretic. It is possible to construct models that accurately represent features of the cosmos without using any language at all, and as you encounter with the “fuzzy logic” concept, human dependence on natural language is often an impediment to gaining accurate information.
Of course, natural language grants us many efficiencies that make it extremely useful in ancestral human contexts (as well as most modern ones). And because we are human, performing error correction on our models requires modelling our own minds, and the process of examination and modelling itself, as part of the overall system we are examining and modelling. But the goal of that recursive modelling is to reduce the noise and error caused by the fuzziness of natural language and other human-specific* limitations, so that we can make accurate and specific predictions about stuff.
*The rise of AI language models means that natural language is no longer a purely human phenomenon. It has also had the side effect of solving the symbol grounding problem by constructing accurate representations of natural language using giant vectors that map inputs to abstract concepts, map abstract concepts to each other, and map all of that to testable outputs. This seems to be congruent with what humans do as well. Here again, formalization and precise measurement, in order to discover the actual binary truth values that really do exist in the environment, are significantly more useful than accepting the limitations of fuzziness.
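To make the “giant vectors” idea concrete, here is a toy sketch (the words, dimensions, and numbers are invented for illustration, not drawn from any actual model): concepts are represented as points in a vector space, and relatedness between concepts becomes a measurable geometric quantity rather than a fuzzy intuition.

```python
# Toy sketch: concepts as vectors, relatedness as cosine similarity.
# The embeddings below are made up; real models learn thousands of
# dimensions from data, but the principle is the same.
import numpy as np

embeddings = {
    "bird":    np.array([0.9, 0.1, 0.3, 0.0]),
    "wing":    np.array([0.8, 0.2, 0.4, 0.1]),
    "physics": np.array([0.1, 0.9, 0.0, 0.7]),
}

def cosine(a, b):
    """Cosine similarity: how closely two concept vectors point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["bird"], embeddings["wing"]))     # high: closely related
print(cosine(embeddings["bird"], embeddings["physics"]))  # lower: weakly related
```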