It’s possibly just a matter of how it’s prompted (the hidden system prompt). I’ve seen similar responses from GPT-4-based chatbots.
Here is Claude 3’s system prompt. There’s nothing about consciousness specifically.
I agree there’s nothing about consciousness specifically, but it’s quite different from the hidden prompt used for GPT-4 Turbo in ways that are relevant: Claude is told to act like a person, while GPT is told that it’s a large language model. That said, I now agree there’s more to it than the prompt alone (i.e., RLHF).
Thanks for the prompt! If we ask Claude 3 to be happy about x, don’t you think that counts as nudging it toward implementing a conscious being?