I guess I don’t see much support for such mutual dependence. Other animals have working memory + finite state control, and learn from experience in flexible ways. It appears pretty useful to them despite the fact they don’t have language/culture. The vast majority of our useful computing is done by systems that have Turing-completeness but not language/cultural competence. Language models sure look like they have language ability without Turing-completeness and without having picked up some “universal planning algorithm” that would render our previous work w/ NNs ~useless.
Why choose a theory like “the capability gap between humans and other animals is because the latter is missing language/culture and also some binary GI property” over one like “the capability gap between humans and other animals is just because the latter is missing language/culture”? IMO the latter is simpler and better fits the evidence.
Hmm, we may have reached the point where we’re not going to make further progress without building mathematical frameworks and empirically testing them, or something.
Other animals have working memory + finite state control, and learn from experience in flexible ways
“Learn from experience” is the key point. Abstract thinking lets you learn without experience: from others’ experience that they communicate to you, or from figuring out how something works abstractly and anticipating the consequences before they occur. This sort of learning, I claim, is only possible when you have the machinery for generating entirely novel abstractions (language, math, etc.), which in turn is only useful if you have a planning algorithm capable of handling any arbitrary abstraction you may spin up.
“The capability gap between humans and other animals is because the latter is missing language/culture and also some binary GI property” and “the capability gap between humans and other animals is just because the latter is missing language/culture” are equivalent, in my view, because you can’t have language/culture without the binary GI property.
Language models sure look like they have language ability
As per the original post, I disagree that they have language ability in the relevant sense. I think they’re situated firmly at Simulacrum Level 4: they appear to communicate, but it’s all just reflexes.
I didn’t mean “learning from experience” to be restrictive in that way. Animals learn by observing others and by building abstract mental models too. But unless one acquires abstracted knowledge via communication, learning requires some form of experience: even abstracted knowledge is derived from experience, whether actual or imagined. Moreover, I don’t think that some extra/different planning machinery was required for language itself, beyond the existing abstraction and model-based RL capabilities that many other animals share. But ultimately that’s an empirical question.
Hmm, we may have reached the point where we’re not going to make further progress without building mathematical frameworks and empirically testing them, or something.
Yeah I am probably going to end my part of the discussion tree here.
My overall take remains:
There may be general-purpose problem-solving strategies that humans and non-human animals alike share, which explain our relative capability gains when combined with the unlocks that came from language/culture.
We don’t need any human-distinctive “general intelligence” property to explain the capability differences among human-, non-human animal-, and artificial systems, so we shouldn’t assume that there’s any major threshold ahead of us corresponding to it.
Moreover, I don’t think that some extra/different planning machinery was required for language itself, beyond the existing abstraction and model-based RL capabilities that many other animals share.
I would expect to see sophisticated ape/early-hominid-level culture in many more species if that were the case. For some reason humans went on the cultural RSI (recursive self-improvement) trajectory whereas other animals didn’t. Plausibly there was some seed cognitive ability (plus some other contextual enablers) that allowed a gene-culture “coevolution” cycle to start.