I argue for the former in the section “Linguistic capability circuits inside LLM-based AI could be sufficient for approximating general intelligence”. Insisting that AGI action must be a single Transformer inference is pointless: sure, The Bitter Lesson suggests that things will eventually converge in that direction, but the first AGI is unlikely to be like that.
Then I misread this section as arguing that an LLM could yada yada, not that it was likely to. Would you like to bet?
Yes, we agree not to care about completing a single inference with what I called more or less minor tricks, like using a context document telling the model to play the role of, say, a three-headed lizardwoman from Venus (say it fits your parental caring needs better than Her).