I think that “the role of language in human thought” is one of the ways that AI could be very different from us. There is research into the way that different languages affect cognitive abilities (e.g., https://psych.stanford.edu/~lera/papers/sci-am-2011.pdf). One of the examples given is that, as a native English speaker, I may have more difficulty learning the base-10 structure in numbers than a Mandarin speaker because of the difference in the number words used in these languages. Language can also affect memory, emotion, etc.
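To make the number-word example concrete, here’s a rough sketch of my own (not from the linked article) contrasting English number words with Mandarin-style ones, which expose the base-10 structure directly:

```python
# English teen/ty words hide the base-10 structure; Mandarin-style words expose it.
english = {11: "eleven", 12: "twelve", 13: "thirteen", 23: "twenty-three"}
mandarin_style = {11: "ten one", 12: "ten two", 13: "ten three", 23: "two ten three"}

for n in (11, 12, 13, 23):
    print(f"{n}: English '{english[n]}' vs Mandarin-style '{mandarin_style[n]}'")
```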
I’m guessing that an AI’s cognitive ability wouldn’t change no matter what human language it’s using, but I’d be interested to know what people doing AI research think about this.
This is a really cool link and topic area. I had been getting ready to post a note on intelligence amplification (IA) based on language, and was going to put it up top on the outer layer of LW.
I recall that many years ago there was some brief talk of replacing the QWERTY keyboard with a layout that was statistically more efficient in terms of hand ergonomics, minimizing the movements needed for the most frequently occurring letter combinations (the analysis was probably limited to English, given the American parochialism of those days, but still, some language has to be chosen).
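As an aside, the kind of statistic such a layout redesign would optimize against is easy to compute. Here is a rough sketch of my own (a real study would use a large corpus rather than the toy sentence below) that counts the most frequent adjacent letter pairs:

```python
from collections import Counter

def top_bigrams(text, n=5):
    """Most frequent adjacent letter pairs, counted within words only."""
    counts = Counter()
    for word in text.lower().split():
        letters = [c for c in word if c.isalpha()]
        counts.update(a + b for a, b in zip(letters, letters[1:]))
    return counts.most_common(n)

# Toy stand-in for a real corpus.
print(top_bigrams("the quick brown fox jumps over the lazy dog"))
```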
Because of the entrenched base of QWERTY typists, the idea didn’t get off the ground. (Thus we are penalizing countless billions of new and future keyboard users because of the legacy habits of a comparatively small percentage of total current and future keyboard users.)
It got me thinking at the time, though, about whether a suitably designed human language would “open up” more of the brain’s inherent capacity for communication. Maybe a larger alphabet, a different set of noun primitives, even a modified grammar.
With respect to IA, might we get a freebie just from designing a language from scratch that was more powerful: one that communicated on average what, say, English or French communicates, yet with fewer phonemes per concept?
Might we get an average 5- or 10-point equivalent IQ boost by designing a language that is both physically faster (fewer “wait states” while we are listening to a speaker) and has larger conceptual bandwidth?
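To put a very rough number on the bandwidth intuition, here is a back-of-envelope sketch; the speaking rate and inventory sizes are illustrative assumptions of mine, not measurements, and treating phonemes as independent, equally likely symbols overstates the true rate:

```python
import math

def speech_bits_per_second(phonemes_per_second, inventory_size):
    """Upper-bound information rate, treating each phoneme as an
    independent, equally likely symbol (real speech carries less)."""
    return phonemes_per_second * math.log2(inventory_size)

# Illustrative numbers only.
print(speech_bits_per_second(12, 40))  # English-like inventory: ~64 bits/sec upper bound
print(speech_bits_per_second(12, 80))  # hypothetical larger inventory: ~76 bits/sec upper bound
```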
We could also consider augmenting spoken speech with signing of some sort, to multiply the alphabet. A problem occurs for speech the listener cannot see, where we would have to fall back on the new spoken language on its own (still gaining the postulated dividend from that).
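Treating the signs as a second symbol stream running in parallel with the spoken one, the gain is easy to estimate; the inventory sizes below are again made-up illustrations:

```python
import math

# Illustrative inventory sizes (assumptions, not measurements).
spoken_symbols = 40  # roughly an English-sized phoneme inventory
manual_signs = 8     # a small hypothetical set of concurrent hand signs

print(math.log2(spoken_symbols))                 # ~5.3 bits per spoken symbol alone
print(math.log2(spoken_symbols * manual_signs))  # ~8.3 bits when each is paired with a sign
```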
However, we already know that for certain kinds of communication, nonverbal cues account for a large share of the total meaning and information conveyed. We already have to “drop back” in bandwidth every time we communicate exclusively in print. In scientific and philosophical writing it doesn’t make much difference, fortunately, but still, a new language might be helpful.
Language, like many things that evolve on their own (as with the biological evolution of organisms), is a bunch of accumulated add-ons, and the result is not necessarily the best that could be done.
Lera Boroditsky is one of the premier researchers on this topic. She has also done some excellent work comparing spatial and temporal metaphors in English and Mandarin, showing that the dominant idioms in each language affect how people cognitively process time.
But the question is broader: is some form of natural language required? (“Natural”, roughly meaning used by a group in day-to-day life, is the key word here.) Differences between the major natural languages are for the most part relatively superficial and translatable, because their speakers are generally dealing with a similar reality.
I think that is one of my questions; i.e., is some form of natural language required? Or maybe what I’m wondering is what intelligence would look like if it weren’t constrained by language—if that’s even possible. I need to read/learn more on this topic. I find it really interesting.