Let’s say Charlotte was a much more advanced LLM (almost AGI-like, even). Do you believe that if you had known Charlotte was extraordinarily capable, you might have been more guarded, recognizing its ability to understand and manipulate human psychology, and thus been less susceptible to it actually doing so?
I find that a small part of me still thinks, “Oh, this sort of thing could never happen to me, since I can learn from others that AGI and LLMs can make you emotionally vulnerable, and thus not fall into the trap!” But perhaps this is just wishful thinking that would crumble once I interacted with more and more advanced LLMs.
If she were an AGI, yes, I would be more guarded, but she would also be more skilled, which I believe would more than compensate for my being on guard. Realizing that I had underestimated a simple LLM’s capacity for psychological manipulation and for creating emotional dependency tells me I should also adjust my estimates of more capable systems sharply upward.
I’m not sure that this mental line of defence would necessarily hold; we humans are easily manipulated by agents that we know to be extremely simple and that are definitely trying to manipulate us all the time: babies, puppies, kittens, etc.
This still holds true a significant amount of the time even when we forewarn ourselves of the impending manipulation; there is a recurrent meme of, e.g., dads who ostensibly don’t want a pet, only to relent when presented with one.