I disagree that GPT’s job, the one that GPT-∞ is infinitely good at, is answering text-based questions correctly. That may be the job we wish it had, but it isn’t, because that’s not the job its boss is making it do. GPT’s job is to answer text-based questions in a way that would be judged as correct by humans, or by previously written human text. If no humans, individually or collectively, know how to align AI, then neither would a GPT-∞ trained on human writing and scored on accuracy by human judges.
This is actually also an incorrect statement of GPT’s job.
GPT’s job is to predict the most likely next token in the distribution its corpus was sampled from.
GPT-∞ would probably give you, for that exact prompt, a blog post about a paper which claims to solve the alignment problem. It would be, on average, exactly the same quality as other articles from the internet containing that text.
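To make the distinction concrete, here is a minimal sketch of what “predict the most likely next token in the distribution its corpus was sampled from” means, using a toy bigram model over a hypothetical corpus (the corpus and function names are illustrative, not anything from a real GPT):

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus, standing in for "human writing".
corpus = "the cat sat on the mat the cat ate".split()

# Empirical next-token distribution P(next | current), estimated by counting
# adjacent pairs. A language model's objective rewards matching this
# distribution -- not answering "correctly" in any deeper sense.
bigrams = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    bigrams[cur][nxt] += 1

def next_token_dist(token):
    """Return the empirical distribution over tokens following `token`."""
    counts = bigrams[token]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

print(next_token_dist("cat"))  # {'sat': 0.5, 'ate': 0.5}
```

A perfect predictor of this distribution reproduces the corpus’s average quality, errors included; it has no incentive to be more correct than the text it was trained on.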