To clarify, which question did you mean it is more interesting than? I see that as one of the questions raised in the post. But perhaps you are contrasting “realize it is conscious by itself” with the methods discussed in “Could we build language models whose reports about sentience we can trust?”
I was skimming the Cog parts too quickly. I have reread them and found the passages you are probably referring to:
Well, this imagined Cog would presumably develop mechanisms of introspection and of self-report that are analogous to our own. And it wouldn’t have been trained to imitate human talk of consciousness
and
Another key component of trying to make AI self-reports more reliable would be actively training models to be able to report on their own mental states.
So yes, I see this question as a crucial one.