Good point, and I totally agree. I want to touch on this in a more complete meditation on the likely uses. One thing is that the amount of cognition happening inside an LLM is tricky to guess at and impossible to know for sure, at least until we get huge improvements in interpretability. But it probably scales with the complexity of the prompt. Shifting more of the cognition into the wrapper — having it elicit useful cognition from the LLM in small, legible steps — will probably help tilt that balance toward interpretability.