It certainly seems intuitively better to do that (have many meta-levels of delegation, instead of only one), since one can imagine particular cases in which it helps. In fact we did some of that (see Appendix E).
But this doesn’t fundamentally solve the problem Abram quotes. Adding more meta-levels between the selector and the executor gives you more lines of protection against updating on infohazards, but it also gives you more silly decisions from the very-early selector. The trade-off between infohazard protection and not-being-silly remains, and so does the quantitative question of “how fast should f grow”.
And of course, we can look at reality, or consult our human intuitions, and discover that, for some reason, this or that kind of f, or kind of delegation procedure, tends to work better in our distribution. But the general problem Abram quotes is fundamentally unsolvable. “The chaos of a too-early market state” literally equals “not having updated on enough information”. “Knowledge we need to be updateless toward” literally equals “having updated on too much information”. You cannot solve this problem in full generality, unless you already know exactly which information you want to update on… which means either having already thought long and hard about it (and thus updated on everything), or having lucked into the right prior without thinking.
Thus, Abram is completely right that we have to think about the human prior and our particular distribution, rather than search for a general solution we can prove mathematical things about.