I tend to frame the issue in terms of “inability to deal with a lot of interconnected, layered complexity in the context window”.
I think that’s also equivalent to my “remaining on-target across long inferential distances” / “maintaining a clear picture of the task even after its representation becomes very complex in terms of the templates you had memorized at the start”.
But that problem is not exactly the same as a problem with long-horizon agency per se.
That’s a fair point, but how many real-life long-horizon-agency problems are of the “clean” type you’re describing?
An additional caveat here is that, even if the task is fundamentally “clean”/tag-team-able, you don’t necessarily know that while working on it. Making progress requires knowing, at each step, what information to discard and what to keep around, and that is itself nontrivial and might require knowing how to deal with layered complexity.
(Somewhat relatedly, see those thoughts regarding emergent complexity. Even if a given long-horizon-agency task is a clean thin line when considered from a fully informed, omniscient perspective – a perspective whose ontology is picked to make the task’s description short – that doesn’t mean the bounded system executing the task can maintain a clean representation of it every step of the way.)
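To make that caveat concrete, here’s a minimal sketch of the kind of context-compression step a bounded agent has to perform at each point along the task. It’s purely illustrative (Python, with hypothetical names like `MemoryItem` and `compress_context`, not any real agent framework): the agent ranks what it knows by its own estimate of future relevance and discards whatever doesn’t fit its budget, and since that estimate comes from the same bounded system, a fundamentally “clean” task can still be lost if something that matters later gets scored low early.

```python
from dataclasses import dataclass


@dataclass
class MemoryItem:
    content: str
    relevance: float  # the agent's own (fallible) estimate of future usefulness


def compress_context(items: list[MemoryItem], budget: int) -> list[MemoryItem]:
    """Keep the `budget` items currently judged most relevant; drop the rest."""
    ranked = sorted(items, key=lambda m: m.relevance, reverse=True)
    return ranked[:budget]


# One step of the loop: whatever is discarded here is gone for good,
# even if a later step turns out to hinge on it.
context = [
    MemoryItem("top-level goal", 0.9),
    MemoryItem("constraint discovered in step 3", 0.2),  # looks ignorable now...
    MemoryItem("current subtask state", 0.8),
]
context = compress_context(context, budget=2)  # the constraint is silently dropped
```

The sketch is just the caveat restated in code: no local scoring rule guarantees that the right items survive, and doing the scoring well is the same layered-complexity problem over again.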