I think the issue is that vanilla HCH itself is susceptible to brief circular arguments, if humans lower down in the tree don’t get access to the context from humans higher up in the tree. E.g. assume a chain of humans for now:
H1 gets the question “what is 100 + 100?” with budget 3
H1 asks H2 “what is 2 * 100?” with budget 2
H2 asks H3 “what is 100 + 100?” with budget 1
H3 says “150”
(Note the final answer stays the same as budget → infinity, as long as H continues “decomposing” the question the same way.)
If HCH can always decompose questions into strictly "smaller" parts (the DAG assumption), then this sort of pathological behavior doesn't happen, because every chain of subquestions eventually bottoms out at a question the human can answer directly.
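The chain above can be sketched as a small simulation. This is a hypothetical illustration (the function name `hch` and the string-matching "decomposition" are my own, not from the original): the circular rewrite between "100 + 100" and "2 * 100" means no budget ever reaches a real base case, so the bottom human's guess propagates up unchanged no matter how large the budget is.

```python
def hch(question: str, budget: int) -> str:
    """Simulate a chain of humans whose 'decomposition' is circular."""
    if budget <= 1:
        # The bottom human can't decompose further and just guesses.
        return "150"
    if question == "what is 100 + 100?":
        # H "decomposes" the sum into a product...
        return hch("what is 2 * 100?", budget - 1)
    if question == "what is 2 * 100?":
        # ...and the product right back into the sum.
        return hch("what is 100 + 100?", budget - 1)
    return "unknown"

# The final answer is the same wrong guess for every budget:
for b in (3, 10, 100):
    print(hch("what is 100 + 100?", b))  # prints "150" each time
```

Under the DAG assumption the second and third branches would instead rewrite the question into genuinely smaller pieces, and increasing the budget would eventually reach a correct base case.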