Absent symbolic language, none of these are capable of transmitting significant general-purpose world knowledge, and thus are irrelevant to the techno-cultural criticality.
It’s likely not literally true, but if it were … this proves my point, doesn’t it?
“Symbolic language” is exactly the type of innovation which can be discontinuous, is more a change in “code” than in “data quantity”, and unlocks many other things. For example, more rapid and robust horizontal synchronization of brains (e.g. when hunting). Or, yes, a jump in the effective quantity of information transmitted via other signals over time.
At the same time … it could clearly be discontinuous: you can teach actual apes sign language, and it seems plausible this would make them more fit if done in the wild.
(It’s actually somewhat funny that Eric Drexler has a hundred-page report based on exactly the premise “AI models using human language is an obviously stupid inefficiency, and you can make a jump in efficiency with a more native-architecture-friendly format”.
This does not seem obviously stupid: e.g., right now, if you want one model to transfer some implicit knowledge it learned, the way to do it is to use the ML-native model to generate a shitload of human natural-language examples, train the other model on them, and have it build the native representation again.)
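To make that last point concrete, here is a minimal sketch of the “transfer via natural language” loop, assuming a Hugging Face causal-LM pair; the model names and the prompt are placeholders for illustration, not anything from Drexler’s report:

```python
# Minimal sketch: the teacher's implicit knowledge crosses to the student
# only as generated human-language text (placeholder models; any causal-LM
# pair sharing a tokenizer would do).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_name = "gpt2-large"  # stands in for the model holding the implicit knowledge
student_name = "gpt2"        # stands in for the model we want to teach

tok = AutoTokenizer.from_pretrained(teacher_name)
teacher = AutoModelForCausalLM.from_pretrained(teacher_name)
student = AutoModelForCausalLM.from_pretrained(student_name)

# 1) The teacher emits a pile of natural-language examples -- the only channel.
prompt = tok("Q: Why do birds migrate?\nA:", return_tensors="pt")
samples = teacher.generate(**prompt, do_sample=True, max_new_tokens=64,
                           num_return_sequences=8,
                           pad_token_id=tok.eos_token_id)
texts = [tok.decode(s, skip_special_tokens=True) for s in samples]

# 2) The student trains on that text, rebuilding its own native representation.
student.train()
opt = torch.optim.AdamW(student.parameters(), lr=1e-5)
for text in texts:
    batch = tok(text, return_tensors="pt")
    loss = student(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    opt.step()
    opt.zero_grad()
```

The whole detour through token space is the inefficiency the premise points at: no activations or weights cross the gap, only text, which the student then has to re-digest from scratch.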