There’s certainly a lot of complexity being glossed over, but I think it’s manageable. Natural languages borrow words from each other all the time, and while there are issues and ambiguities with how to do it, they develop rules that seem to cover them: forbidden phonemes and clusters get replaced in predictable ways, affixes get stripped off and replaced with local versions, and the really hard cases, like highly irregular verbs, prepositions, and articles, form closed sets, so they don’t need to be borrowable.
If I’m translating a math paper from English to an artificial language, and the author makes up a new concept and calls it blarghlizability, I should be able to find a unique, non-conflicting, and invertible translation by replacing the -izability affix and leaving the rest the same, or by applying simple phonetic transforms to it. More importantly, this translation process should determine most of the language’s vocabulary. It’s the difference between a language that has O(n) things to memorize and a language that has O(1) things to memorize.
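To make that concrete, here’s a minimal sketch of what such a deterministic borrowing rule might look like, under entirely made-up assumptions: the affix table, the cluster repairs, and the resulting output form are hypothetical illustrations of the idea, not a proposal for the actual language.

```python
# Hypothetical sketch of a deterministic borrowing rule.
# The affix map and cluster repairs below are invented for illustration.

AFFIX_MAP = {
    "izability": "izol",   # invented target-language affix
    "ization":   "izon",
    "ness":      "nes",
}

# Forbidden clusters get repaired in a fixed, predictable way.
CLUSTER_REPAIRS = [
    ("rghl", "rgal"),
    ("gh", "g"),
]

def borrow(english_word: str) -> str:
    """Map an English derived term to a target-language form by rule."""
    stem, suffix = english_word, ""
    for affix, replacement in AFFIX_MAP.items():
        if english_word.endswith(affix):
            stem = english_word[: -len(affix)]
            suffix = replacement
            break
    for bad, good in CLUSTER_REPAIRS:
        stem = stem.replace(bad, good)
    return stem + suffix

print(borrow("blarghlizability"))  # -> "blargalizol" (hypothetical form)
```

Real invertibility would take more care than this (two English words could collide after the repairs), but the point stands: the mapping is a fixed rule to apply, not another vocabulary item to memorize.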
(EDIT: Deleted a half-finished sentence that I’ll finish in a separate reply downthread)
Yes, that’s true about natural language borrowing, to some extent. Note that calques (borrowing of a phrase with the vocabulary translated but the syntactic structure of the source language retained) are also common; presumably the artificial language would want to avoid these.
Also, a very high percentage of natural language borrowings are nouns. This clearly has a lot to do with the fact that if you encounter a new object, a natural way to label it is to adopt the existing term of the people who’ve already encountered it, but I think there are other factors: I reckon it would be fair to say that nouns are syntactically less complex than verbs. I suspect you’d encounter the least trouble importing concrete nouns into your artificial language.
It’s interesting that you talk about the phonetics of the artificial language; I realised that I’ve been imagining something entirely text-based, but of course there’s no reason it should be (except for the practical/technical difficulties with processing acoustic input, I guess).
I’m curious what got missed off at the end of your post?
Oops, I went back to edit and forgot to write the rest of that paragraph.
I was going to say, supporting borrowing means you need to retain all the borrowable word forms—nouns, adjectives, and verbs—which rules out some extra-radical possibilities, like making a language where verbs are a closed set and actions are represented by nouns. But to my knowledge no natural languages do that, so that’s not much of a restriction.
I think that most of the potential lies in the “extra-radical possibilities”. The traditional linguistic categories (adjectives, nouns, prepositions, and so on) don’t seem to apply very well to any of my word languages. After all, they’re just a bunch of natural-language components; they needn’t show up in an artificial language.
For example, in one of my word languages, there’s no distinction between nouns and adjectives (meaning that there aren’t any nouns or adjectives, I guess). To express the equivalent of the phrase “stupid man”, you simply put the word referring to the set of everything stupid next to the word referring to the set of everything that’s a man, and put the word for set intersection in front of them. You get one of these two forms:
either: [set intersection] [set of everything stupid] [set of everything that’s a man]
or: [set intersection] [set of everything that’s a man] [set of everything stupid]
Of course that assumes that there’s no single word already referring to the intersection of those two sets, or that you just don’t want to use it, but whatever. I just meant to give it as an example.
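As a toy model of that composition, here’s a sketch that treats word meanings as sets and the particle as a prefix operator; the sets and the two “words” are stand-ins I’ve made up, not anything from the actual language.

```python
# Toy model: word meanings as sets, one prefix particle for intersection.
# The vocabulary below is invented purely for illustration.

STUPID = {"alice", "bob"}   # "the set of everything stupid"
MAN    = {"bob", "dave"}    # "the set of everything that's a man"

def intersection_particle(a: set, b: set) -> set:
    """The [set intersection] particle placed before two word-sets."""
    return a & b

# [set intersection] [stupid] [man]  ~  "stupid man"
print(intersection_particle(STUPID, MAN))  # -> {'bob'}
# Intersection is commutative, so both word orders denote the same set.
print(intersection_particle(MAN, STUPID))  # -> {'bob'}
```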
I think that this system makes the language more elegant, but it’s not a terribly big improvement. And it’s not very radical either. The more radical and useful stuff, I’m not ready to give an example of. This is just something simple. But it’s sufficient to say that you shouldn’t let the traditional descriptions constrain you. If you’re trying to make a better language, why limit yourself to just mixing and matching the old parts? There’s a world of opportunity out there, but you’re not gonna find much of it if you trap yourself in the “natural language paradigm”.
in one of my word languages, there’s no distinction between nouns and adjectives
There are Australian Aboriginal languages that work a lot like this, and in some ways go further. The equivalent of the sentence “Big is coming” would be perfectly grammatical in Dyirbal, with the big thing(s) to be determined through the surrounding context.
In some other languages, there’s little or no distinction between adjectives and verbs, so the English “this car is pink” would be translated to something more like “this car pinks”.
Basically what I’m saying is that a large number of the more obvious “extra-radical possibilities” are already implemented in existing languages, albeit not in the overstudied languages of Europe.
By the way, in that word language, I simply have a group of four grammatical particles, each referring to one of the four set operations (union, intersection, complement, and symmetric difference). That simplifies a few of the systems we find in English. For example, we don’t find intersection only in the relationship between a noun and an adjective; we also find it in a bunch of other places. Here’s a list of examples of where we see one of the set operations in English:
There’s a deer over there, and he looks worried. (intersection)
He’s a master cook. (intersection between “master” and “cook”)
The stars are the suns and the planets. (union)
Either there’s a deer over there, or I’m going crazy. (symmetric difference)
Everybody here except Phil is an idiot. (complement)
Besides when I’m doing economics, I’m an academic idiot. (complement)
A lake-side or ocean-side view in addition to a comfortable house is really all I want out of life. (intersection)
A light bulb is either on or off. (symmetric difference)
It’s both a table and a chair. (intersection)
Rocks that aren’t jagged won’t work for this. (complement)
A traditional diet coupled with a routine of good exercise will keep you healthy. (intersection)
A rock or stone will do. (union)
I might be wrong about some of those, so look at them carefully. And I’m sure there are a bunch of other examples. Maybe I missed a lot of the really convoluted ones because of how confusing they are. Either way, the point is that there are a bunch of random examples of the set operations in English. I think simply having a group of 4 grammatical particles for them would make the system a lot simpler and perhaps easier to learn and use.
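For concreteness, here’s a toy sketch of what a four-particle system could look like, with each particle mapped directly onto a set operation. The particle names and the example sets are invented, and I’ve read “complement” as the relative complement (“A except B”), which may or may not match what you intend.

```python
# Toy sketch: four grammatical particles, one per set operation.
# Particle names (ku, pa, ni, xo) and the example sets are invented.

PARTICLES = {
    "ku": lambda a, b: a | b,   # union
    "pa": lambda a, b: a & b,   # intersection
    "ni": lambda a, b: a - b,   # complement, read as relative: "A except B"
    "xo": lambda a, b: a ^ b,   # symmetric difference: "either A or B, not both"
}

EVERYBODY_HERE = {"phil", "ann", "bo"}
PHIL           = {"phil"}
IDIOTS         = {"ann", "bo", "carl"}

# "Everybody here except Phil is an idiot."
#   [complement] [everybody here] [Phil], then intersect with [idiots]
not_phil = PARTICLES["ni"](EVERYBODY_HERE, PHIL)
print(PARTICLES["pa"](not_phil, IDIOTS))   # -> {'ann', 'bo'}

# "A rock or stone will do."  ->  [union] [rocks] [stones]
ROCKS, STONES = {"r1", "r2"}, {"r2", "s1"}
print(PARTICLES["ku"](ROCKS, STONES))      # -> {'r1', 'r2', 's1'}
```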
Are there any natural languages that do anything like this? Sure, there are probably a lot of natural languages that don’t make the distinction between nouns and adjectives. That distinction is nearly useless in an SVO language. We even see English speakers “violate” the noun/adjective system a lot. For example, something like this: “Hand me one of the longs.” If you work someplace where you constantly have to distinguish between the long and short versions of a tool, you’ll probably hear that a lot. But are there any natural languages that use a group of grammatical particles in this way? Or at the very least use one of them consistently?
Note: Perhaps I’m being too hard on the noun/adjective system in English. It’s often useless, but it serves a purpose that keeps it around. Two nouns next to each other (e.g., “forest people”) signify that there’s some relation between the two sets, whereas an adjective in front of a noun signifies that the relation is specifically intersection. That seems to be the only point of the system. Maybe I’m missing something?
Another note: I’m not an expert on set theory. Maybe I’m abusing some of these terms. If anybody thinks that’s the case, I would appreciate the help.