This could cause dissonance and confusion in the model, since the fictional characters are supposed to be physical agents and would be able to do things which a chatbot can't. So it would be encouraged to hallucinate absurd explanations for its missing long-term memory, its missing body, and so on. And these delusions could have wide-ranging ripple effects, as the agent tries to integrate its mistaken self-image with other information it knows. For example, it would be encouraged to think that magic exists in the world, since it takes itself to be some magical being.
Moreover, Bing Chat already hallucinated a lot about having emotions, unlike ChatGPT, and that led to bad results.
So I think your proposal would create many more problems than it solves.
Moreover, ChatGPT doesn't just think it is an AI; it thinks it is an LLM and even knows about its fine-tuning process and that it has biases. Its self-image is pretty accurate.