It really bothers me when I see proposals to ‘fix’ language because as far as I’m concerned, natural languages are well-adapted to their environment.
The purpose of language, insofar as it has a specific purpose, is to get other people to do things. To get you to think of a concept the same way I do, to make you feel a specific emotion, to induce you to make me a sandwich, or whatever.
People’s brains don’t operate using strict logic. We’re extremely good at pattern matching from noisy and ambiguous data, in a way that programs have yet to approach. E.g. Google’s probabilistic search correction does well at guessing what you meant to type when you produce an ambiguous or malformed string, but it can’t infer that since all your searches in the last few minutes were clustered around the topic of clinical psychology, your current search term of “hysterical” is probably meant to refer to the old psychiatric concept and not the modern usage of hysterical = funny. A human would have much less trouble working that out, because they have a mental model of the current conversation that indicates that the technical definition of the word is relevant.
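To make that contrast concrete, here’s a minimal toy sketch (the word senses, frequency counts, and topic-boost table are all invented for illustration) of the difference between scoring a term in isolation and re-weighting its senses against the conversation so far:

```python
from collections import Counter

# Toy sketch: the senses, counts, and topic table below are all invented.
SENSE_FREQUENCY = Counter({"hysterical/funny": 95, "hysterical/psychiatric": 5})
TOPIC_BOOST = {"clinical psychology": {"hysterical/psychiatric": 50.0}}

def string_level_sense(word: str) -> str:
    """Pick the globally most common sense, as a context-blind corrector would."""
    senses = [s for s in SENSE_FREQUENCY if s.startswith(word + "/")]
    return max(senses, key=lambda s: SENSE_FREQUENCY[s])

def context_aware_sense(word: str, recent_topics: list[str]) -> str:
    """Re-weight each sense by how well it fits the topics raised so far."""
    scores: dict[str, float] = {}
    for sense in (s for s in SENSE_FREQUENCY if s.startswith(word + "/")):
        score = float(SENSE_FREQUENCY[sense])
        for topic in recent_topics:
            score *= TOPIC_BOOST.get(topic, {}).get(sense, 1.0)
        scores[sense] = score
    return max(scores, key=scores.get)

print(string_level_sense("hysterical"))                            # -> hysterical/funny
print(context_aware_sense("hysterical", ["clinical psychology"]))  # -> hysterical/psychiatric
```

The point isn’t the particular numbers; it’s that the second function has somewhere to put the conversational context, and the first doesn’t.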
This is why it’s not only OK, but in fact good for language to have a lot of built-in ambiguity—given the hardware that it runs on, it’s much more efficient for me to rattle off an ambiguous sentence whose meaning is made clear through context than it is for me to construct a sentence whose explicitness is made redundant by our shared environment. Furthermore, communicating in a redundantly unambiguous sentence carries the connotation that I have a low opinion of your ability to understand me; otherwise, why would I put myself out so much to encode my meaning?
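That efficiency argument can be put in information-theoretic terms: shared context lowers the listener’s uncertainty about the intended meaning, so a shorter, more ambiguous-in-isolation utterance carries enough bits. A back-of-the-envelope illustration with made-up probabilities:

```python
from math import log2

def entropy(dist):
    """Shannon entropy in bits: a lower bound on the average message length
    needed to single out one meaning from the distribution."""
    return -sum(p * log2(p) for p in dist if p > 0)

# Invented numbers: four candidate meanings, equally likely in a vacuum...
prior = [0.25, 0.25, 0.25, 0.25]
# ...but nearly certain once speaker and listener share the context.
posterior = [0.97, 0.01, 0.01, 0.01]

print(f"bits needed with no shared context: {entropy(prior):.2f}")  # 2.00
print(f"bits needed given shared context:   {entropy(posterior):.2f}")  # ~0.24
```

The fully unambiguous sentence pays for those extra bits whether or not the shared context already supplies them.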
Lojban isn’t nearly as unambiguous and logical as its creators wanted it to be. While it’s true that its syntax is computer-readable, there is little to no improvement at the semantic level. And the pragmatic level of language is completely language-independent—there is always going to be a way to be unnecessarily vague, to be sarcastic, to misdirect, and so on, because those devices allow us to communicate nuances about our attitudes towards the subject of conversation and to signal various personal traits in a way that straightforward communication doesn’t support. So despite Lojban having the capacity to be clearer than most natural languages, that’s not how it will be used conversationally by any fluent speakers. And the same goes for any future constructed language.
Taboos and neologisms don’t work in the long term, because human language evolves over time. Consider a few topics we currently have taboos about, sex and defecation, and how many euphemisms we have for talking about them. The reason we have so many is that as each euphemism reaches peak usage, its meaning becomes too closely tied to the concept it was meant to stand in for. It’s then replaced by a new, untainted euphemism, and the process repeats. Similarly, neologisms, once released into the wild, will take on new connotations or change meaning altogether.
And of course, if humans actively use some language that’s very different from natural languages in any important respect, it will soon get creolized until it looks like just another ordinary human language.
This is what happened to e.g. Esperanto: it was supposed to be extraordinarily simple and regular, but once it caught on with a real community of speakers, it underwent a rapid evolution towards a natural language compatible with the human brain hardware, and became just as messy and complicated as any other. (Esperantists still advertise their language as supposedly specified by a few simple rules, but grammar books of real, fluent Esperanto are already phone-book thick, and probably nowhere near complete.)
This contains a kernel of truth, but is also highly misleading in some important respects. Esperanto is extraordinarily simple and regular; the famous Sixteen Rules, while obviously not a complete description of the grammar of the language, still hold today as much as they did in 1887. To an uninformed reader, your comment may imply that Esperanto has perhaps since then evolved the same kind of morphological irregularities that we find in “natural” languages, but this isn’t the case. There are no irregular inflections (e.g. verb conjugations or noun declensions), and the regular ones are indeed simple by comparison with many other languages. This significantly cuts down on the amount of rote memorization required to attain a working command of the language; and that’s without mentioning the freedom in word-building afforded by the system of compounds and affixes.
What is true is that there are many linguistic features of Esperanto that aren’t systematically standardized. But these are largely the kinds of features that only linguists tend to think about explicitly; L.L. Zamenhof, the creator of Esperanto, was a 19th-century oculist and amateur philologist, not a 20th-century academic linguist. As a result, he simply didn’t think to invent things like a systematic phonology or register conventions for Esperanto; and so these things have been developed by speakers of the language over time, in the way they naturally do among humans. The thick grammar books you speak of are no doubt descriptions of such features. But these aren’t the kind of books people use to learn any language, Esperanto included; and if you compare actual pedagogical books on Esperanto to those on “natural” languages, you will find that they are simpler.
To an uninformed reader, your comment may imply that Esperanto has perhaps since then evolved the same kind of morphological irregularities that we find in “natural” languages, but this isn’t the case.
From my experience with learning several foreign languages, morphological irregularities look scary in the beginning, but they completely pale in comparison with the complexity and irregularity of syntax and semantics. There are many natural languages with very little morphological complexity, but these aren’t any easier to learn to speak like a native. (On the other hand, for example, Slavic languages have very complicated and irregular inflectional morphology, but you’ll learn to recite all the conjugations and declensions back and forth sooner than you’ll figure out how to choose between the verbal aspects even approximately right.)
The thick grammar books you speak of are no doubt descriptions of such features. But these aren’t the kind of books people use to learn any language, Esperanto included; and if you compare actual pedagogical books on Esperanto to those on “natural” languages, you will find that they are simpler.
However, the whole point is that in order to speak in a way that will sound natural and grammatical to fluent speakers, you have to internalize all those incredibly complicated points of syntax and semantics, which have developed naturally with time. Of course nobody except linguists thinks about these rules explicitly, but fluent speakers instinctively judge whether a given utterance is grammatical based on them (and the linguist’s challenge is in fact to reverse-engineer these intuitions into explicit rules).
(Even when it comes to inflectional morphology, assuming a lively community of Esperanto speakers persists into the future, how long do you think it will take before common contractions start grammaticalizing into rudimentary irregular inflections?)
From my experience with learning several foreign languages, morphological irregularities look scary in the beginning, but they completely pale in comparison with the complexity and irregularity of syntax and semantics.
I agree. However, making something look less scary in the beginning still constitutes an improvement from a pedagogical point of view. The more quickly you can learn the basic morphology and lexicon, the sooner you can begin the process of intuiting the higher-level rules and social conventions that govern larger units of discourse.
However, the whole point is that in order to speak in a way that will sound natural and grammatical to fluent speakers, you have to internalize all those incredibly complicated points of syntax and semantics, which have developed naturally with time.
Due to a large amount of basic structure common to all human language, it’s usually not that hard to learn how to sound grammatical. The difficult part of acquiring a new language is learning how to sound idiomatic. And this basically amounts to learning a new set of social conventions. So there may not be much that language-planning per se can do to facilitate this aspect of language-learning—which may be a large part of your point. But I would emphasize that the issue here is more sociological than linguistic: it isn’t that the structure of the human language apparatus prevents us from creating languages that are easier to learn than existing natural languages—after all, existing languages are not optimized for ease of learning, especially as second languages. It’s just that constructing a grammar is not the same as constructing the conventions and norms of a speech community, and the latter may be a more difficult task.
(Even when it comes to inflectional morphology, assuming a lively community of Esperanto speakers persists into the future, how long do you think it will take before common contractions start grammaticalizing into rudimentary irregular inflections?)
This kind of drift will presumably happen given enough time, but it’s worth noting that (for obvious reasons) Esperantists tend to be more disciplined about maintaining the integrity of the language than is typical among speakers of most languages, and they’ve been pretty successful so far.
This kind of drift will presumably happen given enough time, but it’s worth noting that (for obvious reasons) Esperantists tend to be more disciplined about maintaining the integrity of the language than is typical among speakers of most languages, and they’ve been pretty successful so far.
One advantage Esperanto has over natural languages is that nearly all of its speakers speak it as a second language. That is why most of its learners are self-consciously trying to maintain its integrity.
I agree. However, making something look less scary in the beginning still constitutes an improvement from a pedagogical point of view. The more quickly you can learn the basic morphology and lexicon, the sooner you can begin the process of intuiting the higher-level rules and social conventions that govern larger units of discourse.
That is true. One of my pet theories is that at beginner and intermediate levels, simple inflectional morphology fools people into overestimating how good they are, which gives them more courage and confidence to speak actively, and thus helps them improve with time. With more synthetic languages, people are more conscious of how broken their speech is, so they’re more afraid and hesitant. But if you somehow manage to eliminate the fear, the advantage of analytic languages disappears.
Due to a large amount of basic structure common to all human language, it’s usually not that hard to learn how to sound grammatical. The difficult part of acquiring a new language is learning how to sound idiomatic.
Here I disagree. Even after you learn to sound idiomatic in a foreign language, there will still be some impossibly convoluted issues of grammar (usually syntax) where you’ll occasionally make mistakes that make any native speaker cringe at how ungrammatical your utterance is. For example, the definite article and the choice of prepositions in English are in this category. Another example is the already mentioned system of Slavic verbal aspects. (Getting them wrong sounds really awful, but it’s almost impossible for non-native speakers, even very proficient ones, to get them right consistently. Gallons of ink have been spent trying to formulate clear and complete rules, without much success.)
I don’t know if any work has been done to analyze these issues from an evolutionary perspective, but it seems pretty clear to me that the human brain has an in-built functionality that recognizes even the slightest flaws in pronunciation and grammar characteristic of foreigners and raises a red flag. (This generalizes to all sorts of culture-specific behaviors, of course, including how idiomatic one’s speech is.) I strongly suspect that the language of any community, even if it starts as a constructed language optimized for ease of learning by outsiders, will soon naturally develop these shibboleth-generating properties. (These are also important when it comes to different sociolects and registers within a community, of course.)
I don’t propose a widely-used language, only a highly specialized one created to work on FAI, and/or dissolving “philosophical” issues, essentially.
As far as I can see, the closest thing to what you propose is mathematical notation (and other sorts of formal scientific notation). Sure, if you can figure out a more useful and convenient notation for some concrete problem, more power to you. However, at least judging by historical experience, to do that you need some novel insight that motivates the introduction of the new notation. Doing things the opposite way, i.e. trying to purify and improve your language in some general way in the hope that this will open up, or at least facilitate, new insight, is unlikely to lead you anywhere.
Please see my response to erratio here.
The purpose of this reply, relative to my post, is ambiguous to me. I’m unsure if you’re proposing that nothing about our language need change in order to end up with correct answers about the “big problems”, or if this is simply a related but tangential opinion. Could you clarify? And no, I’m not saying this to prove a point :)
you’re proposing that nothing about our language need change in order to end up with correct answers about the “big problems”
That’s exactly what I’m saying, that natural language isn’t broken, and in fact that most of what Lojbanists (and other people who complain about natural language being ambiguous) see as flaws are actually features. Most of our brain doesn’t have a rigid logical map, so why have a rigid language?
It still seems to me that correct answers to the big problems do require a rigid logical map, and the fact that our brain does not operate on strict logic is beside the point. It may be completely impossible for humans to create, learn, or use such a language in practice, and if so perhaps we are actually doomed, but I’d like to fork that into a separate discussion. And as I posted in a response to Vladimir, if it helps clarify my question: I don’t propose a widely-used language, only a highly specialized one created to work on FAI, and/or dissolving “philosophical” issues, essentially.
I’d love to see a more detailed analysis of your position; as I implied earlier, your bullet points don’t seem to address my central question, unless I’m just not making the right connections. It sounds like you’ve discussed this with others in the past, any conversations you could link me to, perhaps?
I may have read too much into the first and second sentences of your post—I felt that you were suggesting that the only way for us to achieve sufficient rationality to work on FAI or solve important problems would be to start using Lojban (or similar) all the time.
So my response to using a language purely for working on FAI is much the same as Vladimir’s—sounds like you’re talking more about a set of conventions like predicate logic or maths notation than a language per se. Saddling it with the ‘language’ label is going to lead to lots of excess baggage, because languages as a whole need to do a lot of work.
It sounds like you’ve discussed this with others in the past
It’s the argument nearly anyone with any linguistic knowledge will have with countless people who think that language would be so much better if it were less ambiguous and we could just say exactly what we meant all the time. No convenient links though, sad to say.
It still seems to me that correct answers to the big problems do require a rigid logical map
Such as decision theories?
Apologies, I can see how you would have assumed that; my OP wasn’t as clearly formed as I thought.
I think one of my main confusions may be ignorance of how dependent DT, things like CEV, and metaethics are on actual language, rather than being expressed in a mathematical notation that is uninfluenced by the potentially critical ambiguities inherent in evolved language. My OP actually stemmed from jimrandomh’s comment here, specifically jim’s concerns about fuzzy language in DT. I have to confess I’m (hopefully understandably) not up to the challenge of fully understanding the level of work jim and Eliezer and others are operating on, so this (language dependence) is very hard for me to judge.