On the literature: here is a classic LW post that addresses this sort of question.
The linked post doesn’t seem to answer it, e.g. in the 4th paragraph EY says:
Why, exactly, is the length of an English sentence a poor measure of complexity? Because when you speak a sentence aloud, you are using labels for concepts that the listener shares—the receiver has already stored the complexity in them.
I also don’t think it fully addresses the question, or even partially addresses it in a useful way; e.g. EY says:
It’s enormously easier (as it turns out) to write a computer program that simulates Maxwell’s equations, compared to a computer program that simulates an intelligent emotional mind like Thor.
The formalism of Solomonoff induction measures the “complexity of a description” by the length of the shortest computer program which produces that description as an output.
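For concreteness, the measure being invoked is usually written like this (my restatement in standard Kolmogorov-complexity notation, not a formula from the post):

$$K_U(x) = \min\{\, \lvert p \rvert : U(p) = x \,\}$$

where $U$ is a fixed universal Turing machine, $p$ ranges over programs, and $\lvert p \rvert$ is the length of $p$ in bits.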
But that "enormously easier" claim bakes in knowledge about how to measure things. Maxwell's equations are, in part, easier to code because we already have a way to describe measurements that is easy to compute. That representation is an abstraction layer! It uses labels for concepts too.
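To make that concrete, here is a minimal sketch (my own illustration, not anything from the post; the 1D finite-difference scheme, the normalized units, and the use of numpy are all choices I'm assuming) of what "a computer program that simulates Maxwell's equations" tends to look like:

```python
import numpy as np

# Toy 1D finite-difference (FDTD-style) update for Maxwell's equations in
# normalized units. The program is short only because it borrows a thick
# abstraction layer: IEEE floats, array semantics, a grid standing in for
# space, a loop standing in for time.
n_cells, n_steps = 200, 500
ez = np.zeros(n_cells)  # sampled electric field
hy = np.zeros(n_cells)  # sampled magnetic field

for t in range(n_steps):
    hy[:-1] += ez[1:] - ez[:-1]                         # H update from the curl of E
    ez[1:] += hy[1:] - hy[:-1]                          # E update from the curl of H
    ez[50] += np.exp(-0.5 * ((t - 30.0) / 10.0) ** 2)   # additive Gaussian source
```

The physics fits in a few lines, but only after the cost of defining what a number, a coordinate, and a measurement are has already been paid; that cost lives in the abstraction layer, much as the complexity of "Thor" lives in the listener's head.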