Being able to perfectly imitate a chimpanzee would probably also require superhuman intelligence, but such a system would still only be able to imitate chimpanzees; effectively, it would be much less intelligent than a human. The same goes for imitating human text: it's very hard, but the result wouldn't yield large capabilities.
It depends on your ability to extract the information from the model. RLHF and instruction tuning are two such techniques that allow capabilities beyond next-token prediction to be extracted from the model. I suspect many other search and extraction techniques will be found that can leverage latent capabilities and understanding in the model that aren't expressed in its text outputs.
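For concreteness, here is a minimal sketch of what instruction tuning amounts to mechanically (the model name and the toy example below are placeholders, not anyone's actual setup): it is ordinary next-token training, just on instruction/response text, which shifts which latent behaviours the model surfaces by default rather than adding new machinery.

```python
# Minimal sketch: instruction tuning as plain supervised next-token training
# on (instruction, response) pairs. "gpt2" and the example text are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

example = "Instruction: Summarise the paragraph below.\nInput: ...\nResponse: ..."
batch = tok(example, return_tensors="pt")

out = model(**batch, labels=batch["input_ids"])  # standard LM loss, nothing exotic
out.loss.backward()                              # one gradient step of "instruction tuning"
```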
I'm aware of just three methods to modify GPTs: in-context learning (prompting), supervised fine-tuning, and reinforcement fine-tuning. The achievable effects seem rather similar.
There are many other ways to search the network in the literature, such as activation vectors. And I suspect we're just getting started on these sorts of search methods.
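As a rough illustration of what an activation-vector-style search looks like (the layer index, contrast prompts, and scale below are arbitrary choices for illustration, not a specific published recipe): take the difference between hidden activations on contrasting prompts and add it back in at inference via a forward hook, steering the model without any prompting or fine-tuning.

```python
# Hedged sketch of activation steering: derive a direction from a contrast pair of
# prompts, then add it to one layer's residual stream during generation via a hook.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
layer = model.transformer.h[6]  # arbitrary middle block, chosen for illustration

def last_hidden(text):
    ids = tok(text, return_tensors="pt")
    states = model(**ids, output_hidden_states=True).hidden_states
    return states[7][0, -1]  # hidden state after block 6, last token

# The contrast pair defines the direction we push the model along.
steer = last_hidden("I love this") - last_hidden("I hate this")

def add_vector(module, inputs, output):
    hidden = output[0] + 4.0 * steer  # scale is a free parameter of the "search"
    return (hidden,) + output[1:]

handle = layer.register_forward_hook(add_vector)
ids = tok("The movie was", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20)[0]))
handle.remove()
```

The layer and scale are the knobs being searched over; the base model's weights never change, which is why this counts as extraction rather than training.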
This approach doesn't seem to work with in-context learning, and it is unclear whether fine-tuning would be more successful.
I think there are probably many approaches that don’t work.