There was some related discussion here, to the effect that we could do something to try to make the AGI as verbal a thinker as possible, IIUC. (I endorse that as plausibly a good idea worth thinking about and trying. I don’t see it as sufficient / airtight.)
Though note that “we could do something to try to make the AGI as verbal a thinker as possible” is a far weaker claim than “A brain-like AGI—modeled after our one working example of efficient general intelligence—would naturally have an interpretable inner monologue we could monitor.” The corresponding engineering problem is much harder if we have to do something special to make the AGI think mostly verbally. Also, the existence of verbal-reasoning-heavy humans is not particularly strong evidence that we can make “most” of the load-bearing thought process verbal; it still seems to me like approximately all of the key “hard parts” of cognition happen on a non-verbal level even in the most verbalization-heavy humans.
The verbal monologue is just the readout near the top of a multi-resolution compressed encoding, at the highest level of abstraction/compression (i.e. the lowest bitrate), but we are not limited to monitoring only at that lowest-bitrate level.
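To make the multi-resolution point concrete, here is a minimal sketch (all module and parameter names are hypothetical, not anything from the discussion above) of an encoder whose intermediate, higher-bitrate latents are all exposed to a monitor, with the verbal channel as just the most compressed readout:

```python
# Hypothetical sketch: a monitor can tap any level of the compression
# hierarchy, not only the low-bitrate "verbal" summary at the top.
import torch
import torch.nn as nn

class MultiResolutionEncoder(nn.Module):
    """Encodes raw input into progressively more compressed latents,
    ending in a low-bitrate 'verbal' readout (token logits)."""
    def __init__(self, input_dim=1024, dims=(512, 128, 32), vocab_size=50_000):
        super().__init__()
        stages, prev = [], input_dim
        for d in dims:
            stages.append(nn.Sequential(nn.Linear(prev, d), nn.ReLU()))
            prev = d
        self.stages = nn.ModuleList(stages)
        self.verbal_head = nn.Linear(prev, vocab_size)  # lowest-bitrate channel

    def forward(self, x):
        latents = []  # intermediate (higher-bitrate) representations
        for stage in self.stages:
            x = stage(x)
            latents.append(x)
        return latents, self.verbal_head(x)  # the "inner monologue" readout

model = MultiResolutionEncoder()
latents, verbal = model(torch.randn(1, 1024))
for i, z in enumerate(latents):
    print(f"level {i}: {z.shape[-1]}-dim latent available to the monitor")
```

The point of the sketch is only that nothing forces the monitor to read `verbal` alone; the intermediate latents are just as accessible, at the cost of interpreting a higher-bitrate signal.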
There is already significant economic pressure on DL systems toward being ‘verbal’ thinkers: nearly all large-scale image models are now paired image->text and text->image models, and the corresponding world_model->text and text->world_model design is only natural for robotics and for AI approaching AGI.
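A hedged sketch of what that symmetric design could look like (hypothetical class and method names, by analogy with image->text / text->image pairs; not a description of any existing system): a captioning head that maps world-model latents to text, and a conditioning head that maps text back into the latent state.

```python
# Hypothetical world_model<->text bridge, analogous to image<->text pairing.
import torch
import torch.nn as nn

class WorldModelTextBridge(nn.Module):
    def __init__(self, latent_dim=256, text_dim=512, vocab_size=50_000):
        super().__init__()
        # world_model -> text: describe the current latent state in tokens
        self.caption_head = nn.Sequential(
            nn.Linear(latent_dim, text_dim), nn.ReLU(),
            nn.Linear(text_dim, vocab_size),
        )
        # text -> world_model: inject a verbal instruction into the latent state
        self.embed = nn.Embedding(vocab_size, text_dim)
        self.condition_head = nn.Linear(text_dim, latent_dim)

    def describe(self, latent):
        """Token logits summarising the world-model state (a monitorable channel)."""
        return self.caption_head(latent)

    def condition(self, token_ids, latent):
        """Nudge the world-model latent with the (mean-pooled) text embedding."""
        text_vec = self.embed(token_ids).mean(dim=1)
        return latent + self.condition_head(text_vec)

bridge = WorldModelTextBridge()
state = torch.randn(1, 256)
logits = bridge.describe(state)                                    # world_model -> text
new_state = bridge.condition(torch.tensor([[42, 7, 99]]), state)   # text -> world_model
```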