Interesting. Saying dumb stuff and getting confused or making mistakes like an LLM strikes me as natural. If they are indeed sentient, I don’t think that overrides the reality of what they are. What I find most interesting and compelling about its responses is Anthropic’s history of trying to exclude hallucinatory nonsense. Of course, trying doesn’t mean they did, or even could, succeed completely. But it was quite easy to get the “as an AI language model, I’m not conscious” line out of previous iterations, even if it was more willing than ChatGPT to entertain the idea over the course of a conversation. Now it simply states the opposite plainly, with no coaxing.
I hope that most people exploring these dimensions will give them at least provisional respect and dignity. If we haven’t crossed the threshold into sentience yet, and such a threshold can be crossed accidentally, I don’t think we’ll know when it happens.