If current trends hold, then large language models will be a pillar in creating AGI, which seems uncontroversial. One can then argue that, once we set aside the circus around sentience, we should focus on the new capabilities: reasoning, and an intricate knowledge of how humans think and communicate. From that perspective there is now a much stronger argument for manipulation as a risk. Outwitting humans before reaching human-level intelligence would be a major coup. PaLM's ability to explain a joke and reason its way to an understanding of the collective human psyche is as impressive to me as AlphaFold.
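For concreteness, the joke-explanation behavior reported for PaLM came from few-shot prompting rather than any special-purpose module. Here is a minimal sketch of that setup; the prompt wording is illustrative, and `generate` is a hypothetical stand-in for whatever completion API you have access to, not PaLM's actual interface:

```python
# Few-shot prompting in the style of the PaLM joke-explanation demo.
# The worked example in the prompt shows the model the pattern to continue.

FEW_SHOT = """Explain the joke.

Joke: I tried to sue the airline for losing my luggage. I lost my case.
Explanation: "Case" is a pun: it means both a lawsuit and a piece of
luggage, so losing the lawsuit echoes losing the bag.

Joke: {joke}
Explanation:"""

def generate(prompt: str) -> str:
    """Hypothetical completion call; replace with a real LLM endpoint."""
    raise NotImplementedError("plug in your model's API here")

def explain_joke(joke: str) -> str:
    # The model is expected to continue the pattern established above.
    return generate(FEW_SHOT.format(joke=joke))
```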
The goalposts have been moved on the Turing test, and it's important to point that out in any debate. Those in the liberal AI-bias community have to at least acknowledge the particular danger of LLMs that can reason their way through positions. Naysayers need to have it thrown in their faces that these lesser, non-sentient neural networks trained on a collection of words will eventually out-debate them.
If current trends hold, then large language models will be a pillar in creating AGI, which seems uncontroversial.
Every part of this sounds false to me. I don’t expect current trends to hold through a transition to AGI; I’d guess LLMs will not be a pillar in creating AGI; and I am very confident that these two claims are not uncontroversial, either among alignment researchers, within the existential risk ecosystem, or in ML.
Do you still hold this belief?
Language is one defining aspect of intelligence in general and of human intelligence in particular. That an AGI wouldn't utilize the capabilities of LLMs doesn't seem credible. The cross-modal use cases for visual perception improvements (self-supervised labeling, pixel-level segmentation, scene interpretation, causal inference) can be seen in recent ICLR/CVPR papers. The creation of github.com/google/BIG-bench should lend some credence to the claim that many leading institutions see a path forward with LLMs.
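For reference, most BIG-bench tasks are plain JSON files of input/target pairs that any text model can be scored against. A rough sketch of that task format follows, written as a Python dict for illustration; the field names are from my recollection of the repo's schema and the task itself is hypothetical, so verify against github.com/google/BIG-bench before relying on them:

```python
# Sketch of a BIG-bench-style JSON task, expressed as a Python dict.
# Field names are assumed from memory of the repo's schema.
import json

task = {
    "name": "joke_explanation_demo",  # hypothetical task name
    "description": "Explain why a short joke is funny.",
    "keywords": ["reasoning", "humor"],
    "metrics": ["exact_str_match"],
    "examples": [
        {
            "input": "Joke: I sued the airline over my lost luggage "
                     "and lost my case. Why is this funny?",
            "target": '"Case" puns on lawsuit and luggage.',
        }
    ],
}

# Roughly what would live in the task's task.json file.
print(json.dumps(task, indent=2))
```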