I think this makes total sense if you cash out “how much intelligence” by fixing an agent-like architecture that wants the goal and then scaling up its parts, so that “intelligence” is something like the total effort being exerted by those parts.
But it isn’t quite a caveat to formulations (like Bostrom’s) that define “how much intelligence” in terms of external behavior rather than internal structure—maybe you’ve heard definitions like “intelligence is how good a system is at optimization across a wide range of domains.” If that’s your measuring stick, you can scale the world model without changing intelligence, so long as the search process doesn’t output better plans on average.
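To make that last point concrete, here is a minimal toy sketch (every name and number is hypothetical, invented just for illustration, not anyone’s actual model): two agents share a goal and the same bounded search process, but one carries a much more detailed world model whose extra detail the search never consults when picking plans. A behavioral measuring stick, average achieved value across domains, can’t tell them apart, even though the “bigger” agent is doing more internal work.

```python
# Toy sketch: a behavioral measure of "optimization power" vs. internal scale.
# Everything here is hypothetical and only meant to illustrate the point above.

# Each "domain" is a hidden mapping from plans to achieved value.
DOMAINS = [lambda plan, k=k: (plan * 7 + k) % 10 for k in range(5)]
PLANS = list(range(10))


def coarse_model(plan, domain_id):
    """Small world model: predicts only the payoff of a plan."""
    return {"payoff": DOMAINS[domain_id](plan)}


def detailed_model(plan, domain_id):
    """Scaled-up world model: same payoff prediction, plus lots of extra
    modelled detail that the search process never looks at."""
    return {
        "payoff": DOMAINS[domain_id](plan),
        "side_effects": [plan * i for i in range(100)],  # extra detail, unused
    }


def search(model, domain_id, budget=4):
    """Bounded search: evaluate the first `budget` candidate plans under the
    model and return the one predicted to be best (by payoff alone)."""
    candidates = PLANS[:budget]
    return max(candidates, key=lambda p: model(p, domain_id)["payoff"])


def behavioral_score(model, budget=4):
    """Bostrom-style measuring stick: how well the agent actually does,
    averaged across domains. Internal structure is invisible to this measure."""
    results = [DOMAINS[d](search(model, d, budget)) for d in range(len(DOMAINS))]
    return sum(results) / len(results)


print(behavioral_score(coarse_model))    # 7.8 under these toy domains
print(behavioral_score(detailed_model))  # identical: the extra model detail
                                         # never changed which plan was chosen
```

The point of the sketch is only that the behavioral definition scores the output of the search; scaling the world model registers as “more intelligence” under that definition only if it makes the chosen plans better on average.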