If current trends hold, then large language models will be a pillar in creating AGI, which seems uncontroversial.
Every part of this sounds false to me. I don’t expect current trends to hold through a transition to AGI; I’d guess LLMs will not be a pillar in creating AGI; and I am very confident that these two claims are not uncontroversial, either among alignment researchers, within the existential risk ecosystem, or in ML.
Language is a defining aspect of intelligence in general and of human intelligence in particular. That an AGI wouldn’t make use of the capabilities LLMs provide doesn’t seem credible. The cross-modal use cases for improving visual perception (self-supervised labeling, pixel-level segmentation, scene interpretation, causal inference) can be seen in recent ICLR/CVPR papers. The creation of github.com/google/BIG-bench should lend some credence to the claim that many leading institutions see a path forward with LLMs.
Every part of this sounds false to me. I don’t expect current trends to hold through a transition to AGI; I’d guess LLMs will not be a pillar in creating AGI; and I am very confident that these two claims are not uncontroversial, either among alignment researchers, within the existential risk ecosystem, or in ML.
Do you still hold this belief?