Performance deteriorating implies that the prior p is not yet a fixed point of the training process, i.e. p ≠ D(A(p)).
At least in the case of AlphaZero, isn’t the performance deterioration from A(p*) to p*? I.e., A(p*) is full AlphaZero, while p* is the “Raw Network” in the figure. We could have converged to the fixed point of the training process (i.e. p* = D(A(p*))) and still see performance deterioration when using the unamplified model instead of the amplified one. I don’t see a fundamental reason why p* = A(p*) should hold after convergence (and I would be surprised if it held for, e.g., chess or Go with reasonably sized models for p*).
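To make that concrete, here is a minimal numerical sketch (my own toy construction, not AlphaZero’s actual A and D): a 3-armed bandit where amplification is one step of soft policy improvement and distillation projects onto a capacity-limited policy class (bounded logits). Iterating p ← D(A(p)) converges to a genuine fixed point p* = D(A(p*)), and yet A(p*) still achieves strictly higher expected reward than p*:

```python
import numpy as np

# Toy 3-armed bandit; arm 0 is the best action. All of this is an
# illustrative stand-in, not AlphaZero's actual amplification/distillation.
r = np.array([1.0, 0.5, 0.0])
beta = 2.0  # "search budget": how strongly amplification sharpens the policy
C = 1.5     # capacity limit of the distilled model (max logit magnitude)

def amplify(p):
    """A(p): one step of soft policy improvement, standing in for MCTS."""
    q = p * np.exp(beta * r)
    return q / q.sum()

def distill(p):
    """D(p): project onto a capacity-limited policy class (bounded logits)."""
    logits = np.log(p)
    logits -= logits.mean()
    logits = np.clip(logits, -C, C)  # the "reasonably sized model" constraint
    q = np.exp(logits)
    return q / q.sum()

def value(p):
    """Expected reward of policy p."""
    return float(p @ r)

p = np.ones(3) / 3
for _ in range(100):
    p = distill(amplify(p))  # the training process p <- D(A(p))

print("p* is a fixed point of D(A(.)):", np.allclose(p, distill(amplify(p))))
print("value of raw network p*:      ", value(p))
print("value of amplified A(p*):     ", value(amplify(p)))  # strictly higher
```

At the fixed point the gap from A(p*) down to p* persists precisely because the bounded-logit class can’t represent the sharper amplified policy, which is exactly the raw-network-vs-full-AlphaZero situation above.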
That… makes a lot of sense. Yep, that’s probably the answer! Thank you :)