Can’t you meaningfully say, for any amplification procedure, “at step x this amplification procedure is like an n-depth approximation of HCH”?
No, you can’t. For example, if your amplification procedure only lets you ask a single subagent a single question, it will approximate a linear HCH rather than a tree-based HCH. If your amplification procedure doesn’t invoke subagents at all, but instead provides the agent with more and more facts, it doesn’t look anything like HCH. The canonical implementations of iterated amplification are trying to approximate HCH, though.
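Roughly, here is the structural difference I mean, as a toy Python sketch. The helper names (`human_answer`, `decompose`) are placeholders I'm making up for "a human answers directly" and "a human splits a question into subquestions"; none of this is from the canonical implementations.

```python
# Toy sketch of the structural difference between these amplification schemes.
# `human_answer` and `decompose` are made-up placeholders, not part of any
# real implementation.

def human_answer(question, extra_context=()):
    """Placeholder: a human answers the question, given any gathered context."""
    return f"answer({question}; used {len(extra_context)} pieces of context)"

def decompose(question):
    """Placeholder: a human breaks a question into subquestions."""
    return [f"{question}/sub1", f"{question}/sub2"]

def tree_hch(question, depth):
    """Tree-based HCH: each level can ask several subagents several questions."""
    if depth == 0:
        return human_answer(question)
    sub_answers = [tree_hch(q, depth - 1) for q in decompose(question)]
    return human_answer(question, extra_context=sub_answers)

def linear_hch(question, depth):
    """Linear HCH: each level asks exactly one subagent exactly one question."""
    if depth == 0:
        return human_answer(question)
    sub_answer = linear_hch(decompose(question)[0], depth - 1)
    return human_answer(question, extra_context=[sub_answer])

def fact_amplification(question, facts):
    """No subagents at all: the agent is just handed more and more facts."""
    return human_answer(question, extra_context=facts)
```

The point is just that the call graphs differ: the first branches into a tree, the second is a single chain, and the third never recurses at all.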
For example, the internal structure of the distilled agent described in Christiano’s paper is unlikely to look anything like a tree. However, my (potentially incorrect?) impression is that the agent’s capabilities at step x are identical to an HCH tree of depth x if the underlying learning system is arbitrarily capable.
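To spell out the inductive picture I have in mind, here is a toy sketch under the "arbitrarily capable learning system" assumption, modelled as a hypothetical lossless `perfect_distill` step. All the names here are stand-ins I'm inventing for illustration, not the actual training procedure.

```python
# Toy sketch of the inductive claim: with a lossless distillation step (the
# "arbitrarily capable learning system" assumption), the agent after x rounds
# of amplify-and-distill behaves exactly like a depth-x HCH tree.

def toy_human(question, helpers=()):
    """Placeholder human policy: answers a question, optionally consulting helpers."""
    consultations = [h(f"sub-question of {question}") for h in helpers]
    return f"answer({question}) after {len(consultations)} consultation(s)"

def hch(human, depth):
    """Depth-limited HCH: a human who may consult a depth-(d-1) HCH as a helper."""
    if depth == 0:
        return human
    return lambda question: human(question, helpers=[hch(human, depth - 1)])

def amplify(human, agent):
    """Amplification: the human answers with the current agent as an assistant."""
    return lambda question: human(question, helpers=[agent])

def perfect_distill(amplified):
    """Assumed lossless distillation: behaviorally identical to the amplified
    system, whatever the distilled agent's internals look like."""
    return amplified

def iterated_amplification(human, steps):
    agent = human  # step 0: the agent just imitates the unaided human
    for _ in range(steps):
        agent = perfect_distill(amplify(human, agent))
    return agent

# Under these assumptions the two give identical answers:
assert iterated_amplification(toy_human, 3)("Q") == hch(toy_human, 3)("Q")
```

With that assumption, the agent after x rounds and a depth-x HCH tree give the same answers, even though the distilled agent's internals needn't look like a tree; real distillation isn't lossless, which is where the "arbitrarily capable" caveat does all the work.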
That sounds right to me.