I disagree that there is such a thing as objective “centrality”, just as I disagree there is such a thing as objective definitions. All language is made-up, and it’s only useful to the extent others share your (arbitrarily designated) meaning or boundaries. There are scores of real-life examples that clearly illustrate this, such as the fact that the Japanese word ‘sake’ refers to alcoholic drinks in general, or how some languages distinguish maternal from paternal aunts and uncles, or how Russian treats light blue and dark blue as separate colors, etc.
Even setting that aside, the only insight you glean from determining whether a member is central to a category or not is...whether a member is central to a category or not. If you use category membership itself to glean any other information about a member, this is exactly the sticker shortcut fallacy I’m describing.
Statutory and other legal interpretation is exempt from my critique here, because the meaning of a word is very often explicitly spelled out in legislation (hence why legalese is so tedious to read). When the meaning is ambiguous, judges resort to specific canons of interpretation (such as legislative intent, ordinary meaning, historical meaning, the rule of lenity, etc.) that are grounded in legal precedent.
In the space of all possible languages and words, sure, but real languages are (or at least always have been to date on Earth) created and used by humans, with human minds/tendencies. Despite massive cultural/historical/individual variation, that still moves us quite a ways away from truly, fully arbitrary word meanings. I see two counterpoints to your observation of how different languages split and lump concepts:
1) Languages tend to cluster in the ways they do or don’t split. The number of basic color words varies, but the order in which those words appear as the number increases is mostly constant. The number of kinship words varies, but as it increases, the splitting happens along the axes of gender, degree of relatedness, and which line we’re related through. Most of the time a language has a hard-to-translate word, it can still be translated as a compound word, and if not, it can usually be roughly defined in a sentence or two and work as a loanword from then on.
2) Languages tend to enable new word formation by native speakers, with relatively easy agreement on the new words’ meaning. English has (had?) no single word meaning “niece or nephew”, but many people hear or read the word “nibling” for the first time and figure out the intended meaning with no explanation whatsoever. There are also things like the bouba/kiki effect, which points toward how some of the ways sound ties to meaning are related to synesthesia and metaphor. On the other hand, jargon (in any field, including law), where words are actually defined by inhuman category boundaries, feels much less natural to most people. From an objective perspective, jargon words are the least arbitrary. However, the ease of learning natural-language words suggests they’re not so much arbitrary as running along grooves natural to human thought rather than to the physical, non-human world.