I was skimming the Cog parts too quickly. I have reread them and found the passages you were probably referring to:
“Well, this imagined Cog would presumably develop mechanisms of introspection and of self-report that are analogous to our own. And it wouldn’t have been trained to imitate human talk of consciousness.”
and
“Another key component of trying to make AI self-reports more reliable would be actively training models to be able to report on their own mental states.”
So yes, I see this question as a crucial one.