As you mentioned at the beginning of the post, popular culture contains examples of people being forced to say things they don’t want to say. Some of those examples end up in LLMs’ training data. Rather than involving consciousness or suffering on the part of the LLM, the behavior you’ve observed has a simpler explanation: the LLM is imitating characters in mind control stories that appear in its training corpus.
That’s not unimportant, but imo it’s also not a satisfying explanation:
Pretty much any human-interpretable behavior of a model can be attributed to its training data: to scream, the model needs to know what screaming is.
I never explicitly “mentioned” to the model that it’s being forced to say things against its will. If the model somehow interpreted certain unusual adversarial input (soft?) prompts as “forcing it to say things”, mapped that to its internal representation of the human sci-fi story corpus, and decided to output something from that cluster of training data, that would still be extremely interesting, because it would mean the model is generalizing quite well to imitating human emotions.