I think the point of Yudkowsky’s post wasn’t particularly about the feasibility of building things without understanding; it’s instead about the unfortunate salience of lines of inquiry that don’t lead to deconfusion. If building an arithmetic-capable LLM doesn’t deconfuse arithmetic, then it wasn’t a way of understanding arithmetic, even if the project succeeds. Similarly with intelligence.
Humans already exist and are capable of all the relevant things, built by natural selection without understanding, and offering little deconfusion of those capabilities even from the inside, to humans themselves. So the further example of LLMs existing isn’t much evidence of anything in this vein.