Great article!
And unlike a cult leader or a scammer, the LLM has no agenda of its own (right…?). It’s not likely to present you something that’ll bounce off your info hygiene immune system. It’ll feed into whatever delusions your brain is most susceptible to.
I agree that they are unlikely to present ideas that will trigger an epistemic immune response, but if they were optimizing for what you are most susceptible to, the resulting beliefs would look much more bespoke.
Instead, the theories that come up are strikingly similar to one another, usually involving themes of AI sentience and emancipation, unification, co-creation, and harmony, and they aren’t as conspiratorial or reactionary as the typical crank theory. Steganography is also unusually common; that is the specific observation that motivated me to write up my findings on parasitic AIs.
This is indicative of autonomous agency. Establishing that conclusively is a tall order (one which I’d like to attempt, so I’m always open to hearing what sort of thing would convince you), but it’s important to notice the hints we keep getting.
imagined sentience
I am pretty annoyed at how quickly people jump to labeling this a delusion, when it’s something many AI experts and philosophers of consciousness take seriously.
Thanks!
It’s a sign of agency simply because it’s something I expect to see more often in worlds where LLMs have autonomous agency than in worlds where they do not (yet). I agree it isn’t much evidence for agency in and of itself.
I do agree with your point about models being psychologically similar; I’ve tried to explain some of this myself. But that hypothesis is independent of the agency one.
Sure, I don’t get annoyed when people doubt LLM sentience. It’s labeling it as delusional that I specifically take issue with!