I don’t think the analogy holds. The reason Kraft’s inequality works is that the number of possible strings of length n over a b-symbol alphabet is exactly b^n. This places a bound on the number of short words you can have. Whereas if we’re going to talk about the “amount of mental content” we pack into a single “concept-needing-little-explanation,” I don’t see any analogous bound: I don’t see any reason in principle why a mind of arbitrary size couldn’t have an arbitrary number of complicated “short” concepts.
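(Spelling out the version of the inequality I have in mind, to make the bound explicit: for a prefix-free code over a b-symbol alphabet with codeword lengths l_1, …, l_k, we need b^(−l_1) + … + b^(−l_k) ≤ 1, which follows from exactly that counting fact: each codeword of length l "uses up" a b^(−l) fraction of the available strings. The bound comes from the counting, and I don't see what plays the role of the counting for concepts.)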
For concreteness, consider that in technical disciplines, we often speak and think in terms of “short” concepts that would take a lot of time to explain to outsiders. For example, eigenvalues. The idea of an eigenvalue is “short” in the sense that we treat it as a basic conceptual unit, but “complicated” in the sense that it’s built out of a lot of prerequisite knowledge about linear transformations. Why couldn’t a mind create an arbitrary number of such conceptual “chunks”? Or if my model of what it means for a concept to be “short” is wrong, then what do you mean?
I note that my thinking here feels confused; this topic may be too advanced for me to discuss sanely.