For a few years now, I’ve been working on a project to build an artificial language. I strongly suspect that the future of the kind of communication that goes on here will belong to an artificial language. English didn’t evolve for people like us. For our purpose, it’s a cumbersome piece of shit, rife with a bunch of fallacies built directly into its engine. And I assume it’s the same way with all the other ones. For us, they’re sick to the core.
But I should stress that I don’t think the future will belong to any kind of word language. English is a word language, Lojban is a word language, etc. Or at least I don’t think the whole future will belong to one. We must free ourselves from the word paradigm. When somebody says “language”, most people think words. But why? Why not think pictures? Why not diagrams? I think there’s a lot of potential in the idea of building a visual language. An artificial visual language. That’s one thing I’m working on.
Anyway, for the sake of your rationality, there’s a lot at stake here. A bad language doesn’t just fail to properly communicate to other people; it systematically corrupts its user. How often do you pick up where you left off in a thought process by remembering a bunch of words? All day, every day? Maybe your motto is to work to “improve your rationality”? Perhaps you write down your thoughts so you can remember them later? And so on. It’s not just other people who can misinterpret what you say; it’s also your future self who can misinterpret what your present self says. That’s how everybody comes to believe such crazy stuff. Their later selves systematically misinterpret their earlier selves. They believe what they hear, but they hear not what they meant to say.
For a few years now, I’ve been working on a project to build an artificial language.
I don’t want to sound disrespectful towards your efforts, but to be blunt, artificial languages intended for communication between people are a complete waste of time. The reason is that human language ability is based on highly specialized hardware with a huge number of peculiarities and constraints. There is a very large space for variability within those, of course, as is evident from the great differences between languages, but any language that satisfies them has roughly the same level of “problematic” features, such as irregular and complicated grammar, semantic ambiguities, literal meaning superseded by pragmatics in complicated and seemingly arbitrary ways, etc., etc.
Now, another critical property of human languages is that they change with time. Usually this change is very slow, but if people are forced to communicate in a language that violates the natural language constraints in some way, that language will quickly and spontaneously change into a new natural language that fits them. This is why attempts to communicate in regularized artificial languages are doomed, because a spontaneous, unconscious, and irresistible process will soon turn the regular artificial language into a messy natural one.
Of course, it does make sense to devise artificial languages for communication between humans and non-human entities, as evidenced by computer programming languages or standardized dog commands. However, as long as they have the same brain hardware, humans are stuck with the same old natural languages for talking to each other.
I don’t want to sound disrespectful towards your efforts, but to be blunt, artificial languages intended for communication between people are a complete waste of time.
A word language constructed from scratch based purely on what the creator thinks superior would indeed fall prey to your criticisms, but there is a third possibility between a totally natural and a totally artificial language. For lack of a better term, I’ll call it a cultivated language. That is, a language built up out of real efforts to communicate for practical purposes, but with deliberate constraints imposed by the medium.
When language first formed, humans could mostly only communicate in a linear way, the linearity of communication using mouths and ears being the bottleneck. The introduction of writing systems could eventually have fixed this (through a visual non-linear language like saizai’s), if not for inertia, as well as the fact that most non-intellectual people would be less interested in learning a language that had no carryover to speech.
But now we have the technology for a project that would place constraints on how people could communicate and just see what happens. In particular, if people could only communicate in 2D diagrams on a website designed for this language cultivation project, they might end up with something like saizai is trying to design, except it would be spontaneous.
And if there is any merit in Ian Ryan’s arguments for a constructed language above, those insights could be incorporated into the constraints on the users to see how they play out. That seems to be the best of both worlds: a sort of guided evolution.
How are you so sure of all that stuff?

If you specify in more detail which parts of what I wrote you dispute, I can provide a more detailed argument.
As the simplest and most succinct argument against artificial languages with allegedly superior properties, I would make the observation that human languages change with time and ask: what makes you think that your artificial language won’t also undergo change, or that the change will not be such that it destroys these superior properties?
If you build an artificial word language, you could make it in such a way that it would drive its own evolution in a useful way. A few examples:
If you make a rule available to derive a word easily, it would be less likely that the user would coin a new one.
If you also build a few other languages with a similar sound structure, you could make it super easy to coin new words without messing up the sound system.
If you make the sound system flow well enough, it would be unlikely that anybody would truncate the words to make it easier to pronounce or whatever.
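The derivation idea in the first example can be illustrated with a toy sketch (the roots and affixes below are invented for illustration; they aren’t drawn from any actual design): if the language ships with productive derivation rules, a speaker can build the word they need on the spot instead of coining an irregular new one.

```python
# Hypothetical derivation rules for an artificial word language.
# The affixes here are made up purely to illustrate the mechanism.

AFFIXES = {
    "agent": "-or",   # one who does X
    "place": "-ar",   # place where X happens
    "tool": "-il",    # instrument for doing X
}

def derive(root, role):
    """Build a derived word from a root and a semantic role."""
    return root + AFFIXES[role]

print(derive("bak", "agent"))  # "bak-or"
print(derive("bak", "tool"))   # "bak-il"
```

Because every root combines with every affix by the same rule, there is never a gap in the lexicon that would tempt a speaker to coin an irregular one-off word.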
I don’t understand how you could dismiss out of hand the idea that you could build a language that wouldn’t lose its superior qualities. There are a ton of different ways to make the engine defend itself in that regard. People mess with the English sound system only to make it flow better, and there’s no reason why you couldn’t just make an artificial language which already flows well enough.
Also, I’m not gonna try to convert the masses to my artificial language. In normal life, we spend a lot of our time using English to try to do something other than get the other person to think the same thought. We try to impress people, we try to get people to get us a glass of water, etc. I’m not interested in building a language for that kind of communication. All I’m interested in is building a language for what we try to do here on LW: reproduce our thought process in the other person’s head.
But what that means is that the “wild” needn’t be so wild. If the only people who use the artificial language are 1,000 people like you and me, I don’t see why we couldn’t retain its superior structure. I don’t see why I would take a perfectly good syntax and start messing with it. It would be specialized for one purpose: reproducing one’s thoughts in another’s head, especially for deep philosophical issues. We would probably use English in a lot of our posts! We would probably use a mix of English and the artificial language.
My response (“how are you so sure of all that stuff”) probably wasn’t very constructive, so I apologize. Perhaps I should have asked for an example of an artificial language that transformed into an irregular natural one. Since you probably would have mentioned Esperanto, I’ll respond to that. Basically, Esperanto was a partially regularized mix and match of a bunch of different natural language components. I have no interest in building a language like that.
Languages like Esperanto are still in the “natural language paradigm”; they’re basically just like idealized natural languages. But I have a different idea. If I build an artificial word language, its syntax won’t resemble any natural language that you’ve seen. At least not in that way. Actually, it would probably be more to the point to simply say that Esperanto was built for a much different reason. It’s a mix and match of a bunch of natural language components, and people use it like they use a natural language. It’s not surprising that it lost some of its regularity.
I’m getting pretty messy in this post, but I simply don’t have a concise response to this topic. Everywhere I go, people seem to have that same idea about artificial language. They say that we’re built for natural language, and either artificial language is impossible, or it would transform into natural language. I really just don’t know where people get that idea. How could we conceive of and build an artificial language, but at the same time be incapable of using it? That seems like a totally bizarre idea. Maybe I don’t understand it or something.
If you plan to construct a language akin to programming languages or mathematical formulas, i.e. one that is fully specified by a formal grammar and requires slow and painstaking effort for humans to write or decode, then yes, clearly you can freeze it as an unchangeable standard. (Though of course, devising such a language that is capable of expressing something more general is a Herculean task, which I frankly don’t consider feasible given the present state of knowledge.)
On the other hand, if you’re constructing a language that will be spoken by humans fluently and easily, there is no way you can prevent it from changing in all sorts of unpredictable ways. For example, you write:
People mess with the English sound system only to make it flow better, and there’s no reason why you couldn’t just make an artificial language which already flows well enough.
However, there are thousands of human languages, which have all been changing their pronunciation for (at least) tens of thousands of years in all kinds of ways, and they keep changing as we speak. If such a happy fixed point existed, don’t you think that some of them would have already hit it by now? The exact mechanisms of phonetic change are still unclear, but a whole mountain of evidence indicates that it’s an inevitable process. Similar could be said about syntax, and pretty much any other aspect of grammar.
Look at it this way: the fundamental question is whether your artificial language will use the capabilities of the human natural language hardware. If yes, then it will have to change to be compatible with this hardware, and will subsequently share all the essential properties of natural languages (which are by definition those that are compatible with this hardware, and whose subset happens to be spoken around the world). If not, then you’ll get a formalism that must be handled by the general computational circuits in the human brain, which means that its use will be very slow, difficult, and error-prone for humans, just like with programming languages and math formulas.
However, there are thousands of human languages, which have all been changing their pronunciation for (at least) tens of thousands of years in all kinds of ways, and they keep changing as we speak. If such a happy fixed point existed, don’t you think that some of them would have already hit it by now?
No, I don’t. Evolution is always a hack of what came before it, whereas scrapping the whole thing and starting from scratch doesn’t suffer from that problem. I don’t need to hack an existing structure; I can build exactly what I want right now.
Here’s an excellent example of this general point: Self-segregating morphology. That’s the language construction term for a sound system where the divisions between all the components (sentences, prefixes, roots, suffixes, and so on) are immediately obvious and unambiguous. Without understanding anything about the speech, you know the syntactical structure.
That’s a pretty cool feature, right? It’s easy to build that into an artificial language, and it certainly makes everything easier. It would be an important part of having a stable sound system. The words wouldn’t interfere with each other, because they would be unambiguously started and terminated within a sound system where the end of every word can run smoothly against the start of any other word. If I were trying to make a stable sound system, the first thing that I would do is make the morphology self-segregating.
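To make the idea concrete, here is a toy sketch (the phoneme shapes are invented for illustration, not taken from any actual conlang): if every root has the shape CVC and every suffix the shape VC, then two adjacent consonants can only ever occur across a word boundary, so an unspaced stream of speech segments unambiguously.

```python
# Toy self-segregating morphology: roots are CVC, suffixes are VC.
# Inside a word, consonants and vowels strictly alternate, so any
# consonant cluster must straddle a word boundary.

VOWELS = set("aeiou")

def segment(stream):
    """Split an unspaced phoneme stream into words, then into morphemes."""
    # A word boundary falls exactly between two adjacent consonants.
    words, start = [], 0
    for i in range(1, len(stream)):
        if stream[i] not in VOWELS and stream[i - 1] not in VOWELS:
            words.append(stream[start:i])
            start = i
    words.append(stream[start:])
    # Within a word: the first three phonemes are the CVC root,
    # and the remainder splits into two-phoneme VC suffixes.
    return [[w[:3]] + [w[i:i + 2] for i in range(3, len(w), 2)]
            for w in words]

print(segment("takadmol"))  # [['tak', 'ad'], ['mol']]
```

A listener (or this parser) recovers the word and morpheme boundaries without knowing what any of the morphemes mean, which is exactly the property the term describes.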
But if a self-segregating morphology is such a happy point, why hasn’t any natural language come to that point? Well, that should be pretty obvious. No hack could transform a whole language into having a self-segregating morphology. Or at least I don’t know of such a hack. But even then, it’s trivially easy to make one if you start from scratch! Don’t you accept the idea that some things are easier to design than to evolve (because perhaps the hacking process doesn’t have an obvious way to be useful throughout every step to get to the specific endpoint)?
The exact mechanisms of phonetic change are still unclear, but a whole mountain of evidence indicates that it’s an inevitable process.
That whole mountain of evidence concerns natural languages with irregular sound systems. A self-segregating morphology that flows super well would be a whole different animal.
Look at it this way: the fundamental question is whether your artificial language will use the capabilities of the human natural language hardware. If yes, then it will have to change to be compatible with this hardware, and will subsequently share all the essential properties of natural languages (which are by definition those that are compatible with this hardware, and whose subset happens to be spoken around the world). If not, then you’ll get a formalism that must be handled by the general computational circuits in the human brain, which means that its use will be very slow, difficult, and error-prone for humans, just like with programming languages and math formulas.
Per my points above, I still don’t see why using the capabilities of the natural language hardware would lead to it changing in all sorts of unpredictable ways, especially if it’s not used for anything but trying to reproduce your thought in the other person’s head, and if it’s not used by anybody but a specific group of people with a specific purpose in mind. I still imagine an engine well-built to drive its own evolution in a useful way, and avoid becoming an irregular mess.
Self-segregating morphology. That’s the language construction term for a sound system where the divisions between all the components (sentences, prefixes, roots, suffixes, and so on) are immediately obvious and unambiguous. Without understanding anything about the speech, you know the syntactical structure.
Only until phonological changes, morphological erosion, cliticisation, and sundry other processes take place. And whether and how those processes happen isn’t related to how well the phonology flows, either, as far as I can tell.
The flow thing was just an example. The point was simply to illustrate that we shouldn’t reject out of hand the idea that an ordinary artificial language (as opposed to mathematical notation or something) could retain its regularity.
The point is simply that the evolution of the language directly depends on how it starts, which means that you could design in such a way that it drives its evolution in a useful way. Just because it would evolve doesn’t mean that it would lose its regularity. The flow thing is just one example of many. If it flows well, that’s simply one thing to not have to worry about.
That whole mountain of evidence concerns natural languages with irregular sound systems. A self-segregating morphology that flows super well would be a whole different animal.
How do you know that? To support this claim, you need a model that predicts the actually occurring sound changes in natural languages, and also that sound changes would not occur in a language with self-segregating morphology. Do you have such a model? If you do, I’d be tremendously curious to see it.
Sorry, I should have said that it’s not necessarily the same animal. The whole mountain of evidence concerns natural languages, right? Do you have any evidence that an artificial language with a self-segregating morphology and a simple sound structure would also go through the same changes?
So I’m not necessarily saying that the changes wouldn’t occur; I’m simply saying that we can’t reject out of hand the idea that we could build a system where they won’t occur, or at least build a system where they would occur in a useful way (rather than a way that would destroy its superior qualities). Where the system starts would determine its evolution; I see no reason why you couldn’t control that variable in such a way that it would be a stable system.
Is it by any chance a nonlinear, fully two-dimensional writing system?
Thanks for the link. Yeah, that’s one of the ideas. It’s still in its infancy though, so I don’t have anything to show off.