Given our current state of knowledge about consciousness, it's indeed not impossible that modern LLMs are conscious. I wouldn't say it's likely, and I definitely wouldn't say they are as likely to be conscious as uploaded humans would be. But the point stands: we don't know for sure, and we lack a proper way to find out.
Previously we could've vaguely pointed towards the Turing test, but we are past that stage now. Behavioral analysis of a model is mostly unhelpful at this point: a few tweaks can take the same LLM that previously confidently claimed not to be conscious and make it swear that it is conscious and suffering. So what a current LLM says about the nature of its own consciousness gives us essentially zero bits of evidence.
This is another reason to stop building ever-bigger models and instead spend serious time figuring out what we have already created. At some point we may create a conscious LLM, fail to tell the difference, and end up in a moral catastrophe.