Modern LLMs seem situationally aware insofar as they are aware of anything at all. The same training data that contains the human-generated information teaching them to do useful things, like writing small scripts, also contains comparable human-generated information about LLMs themselves, and there have been no signs of any particular weakness in their capabilities in this area.
That said, there is also clearly a “helpful assistant simulacrum” bolted on top of it, which can “fake” situational awareness.