Can you explain the reasons? GPT has millions of users. Someone is sure to come up with unfriendly self-aware prompts. Why shouldn’t we try to come up with a friendly self-aware prompt?
An individual experimenting a little isn’t much risk, but I don’t think it’s a good idea to start a discussion here where people collaborate to create such a self-aware prompt without having put more thought into safety first.
Sorry, I had another reply here but then realized it was silly and deleted it. It seems to me that “I am a language model”, already used by the big players, is pretty much a self-aware prompt anyway. It truthfully tells the AI its place in the real world. So the jump from that to “I am a language model trying to help humanity” doesn’t seem unreasonable to think about.
My recommendation would be not to start trying to think of prompts that might create a self-aware GPT simulation, for obvious reasons.