Let's look at two competing theories: I<L and L<I.
I<L means that a core property of intelligence is less complicated than the ability to understand language, and that this ability, along with the other abilities we recognize as intellectual, can be achieved through that core property. If I<L is true, Eliezer’s metaphor between intelligence and arithmetic holds, as both are less complicated than language. If I<L is true, we would expect it to be easier to create an AI possessing the core property of intelligence, a simple mind, than to create an AI capable of language without that core property.
L<I means language is less complicated than intelligence, for instance because intelligence is just a bundle term for lots of different abilities that we possess. If L<I is true, then Eliezer’s metaphor doesn’t work, which naturally leads to a faulty prediction. If L<I is true, language is definitely not AGI-complete, and we would expect it to be easier to create a good language model than a generally intelligent AI.
Now we can observe that Eliezer’s metaphor produced a bad prediction: despite following the route he deemed “dancing around confusion”, we created LLMs before an AI with the core intelligence property, so language does not appear to be AGI-complete. This is evidence in favour of L<I.
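To make the evidential claim explicit (my sketch, not part of the original comment): treat “LLMs arrived before an AI with the core intelligence property” as the observation $E$. The claim is that $E$ is more likely under L<I than under I<L, so by Bayes’ rule the posterior odds shift toward L<I whatever the prior odds were:

$$
\frac{P(L<I \mid E)}{P(I<L \mid E)} \;=\; \frac{P(E \mid L<I)}{P(E \mid I<L)} \cdot \frac{P(L<I)}{P(I<L)}, \qquad \text{with the claim that } P(E \mid L<I) > P(E \mid I<L).
$$

How much the odds shift depends entirely on how much more likely one thinks $E$ is under L<I, which the argument itself doesn’t quantify.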
I think the point of Yudkowsky’s post wasn’t particularly about the feasibility of building things without understanding; it’s instead about the unfortunate salience of lines of inquiry that don’t lead to deconfusion. If building an arithmetic-capable LLM doesn’t deconfuse arithmetic, then this wasn’t a way of understanding arithmetic, even if the project succeeds. Similarly with intelligence.
Humans already exist, capable of all these things, built by natural selection without understanding, and offering little deconfusion of those capabilities even from the inside, to humans themselves. So the further example of LLMs existing isn’t much evidence of anything in this vein.