This is minimal evidence that it's really a GPT-4. Hallucinating about a hypothetical GPT-4 is not at all hard for such a model (go ask Playground/ChatGPT about "GPT-4"), and it's conditioning the response mentioning GPT-4 on ~10 search hits (3+7), any of which might mention the widespread speculation about Prometheus/Sydney being GPT-4. Even if it supposedly got that information from its prompt, the prompt can be hallucinated, and why would the prompt mention it being GPT-4? The ChatGPT prompt doesn't mention it being GPT-3.5 or related to davinci-003, after all.