The purpose of the prompt injection is to influence the model's output; it does not imply anything about ChatGPT's capabilities. Most likely it is meant to keep the model from hallucinating search results, or to trigger the disclaimer about not being able to browse the internet that it frequently issues.
I think it is meant to let them train one model that both can and can't browse the web in different modes, and then signal the model's current capabilities to it so it acts with the appropriate self-awareness.
If they just wanted it to always say it can't browse the web, they could train that in unconditionally. Instead, I think they train the behavior conditioned on a flag in the prompt, so they can flip it off when they actually do enable browsing internally.
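Concretely, I imagine the setup looks something like this (a toy sketch; the flag wording and the function are my invention, not OpenAI's actual prompt format):

    # Toy illustration of capability-flag conditioning (hypothetical,
    # not OpenAI's real format). One model is trained to read a flag in
    # its system prompt and adjust its behavior accordingly.
    def build_system_prompt(browsing_enabled: bool) -> str:
        base = "You are ChatGPT, a large language model trained by OpenAI."
        if browsing_enabled:
            flag = "Browsing: enabled. You may issue search queries."
        else:
            flag = ("Browsing: disabled. Do not claim to have fetched live "
                    "web content; if asked, say you cannot browse.")
        return base + "\n" + flag

    # Flipping the flag switches the same model between the two modes.
    print(build_system_prompt(browsing_enabled=False))

Training one model against both prompts is cheaper than maintaining two models, and the flag gives the serving side a single switch to flip when browsing ships.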