When people call GPT an imitator, the label is intended to guide us toward accurate intuitions about how to predict its outputs. If I consider GPT as a token predictor, that does not help me very much in predicting whether it will output computer code that successfully implements some obscure modified algorithm I’ve specified in natural language. If I consider it as an imitator, then I can guess that the departures of my modified algorithm from what it’s likely to have encountered on the internet mean that it’s unlikely to produce a correct output.
I work in biology, and it’s common to anthropomorphize biological systems to generate hypotheses and synthesize information about how such systems will behave. We also want to be able to translate this back into concrete scientific terms to check if it still makes sense. The Selfish Gene does a great job at managing this back and forth, and that’s part of why it’s such an illuminating classic.
I think it’s useful to promote a greater understanding of GPT-as-predictor, which is the literal and concrete truth, but it is better to integrate that with a more intuitive understanding of GPT-as-imitator, which is my default for figuring out how to work with the technology in practice.
If your working model is “GPT is an imitator”, you won’t expect it to have any superhuman capabilities.
From the AI safety perspective, having a model that assumes GPT systems won’t show superhuman capabilities, when they can in fact have those capabilities, is problematic.
I agree to some extent—that specific sentence makes it sound dumb. “GPT can imitate you at your best” gets closer to the truth.
I don’t see why we should believe that “GPT can imitate you at your best” is an upper bound.
I didn’t say it was.