This is such an interesting post that reminds me of so many things.
I don’t think this experience of manipulation by AI is even unique to LLMs. I think the YouTube recommendation algorithm is already manipulating humans in similar ways at a vastly larger scale. James Bridle gave this TED Talk a few years back looking at how the never-ending arms race for human attention on YouTube created these truly strange and dystopian situations.
At one point in the talk he discusses a video titled “Angry Baby BURIED ALIVE Spiderman w/ Maleficent Spidergirl Catwoman! Superhero Fun”. Here’s a quote from his talk:
“The sight of grown men in diapers rolling around in the sand in the hope that an algorithm they don’t really understand will give them money for it suggests that this probably isn’t the thing we should be basing our society and culture on, and the way in which we should be funding it.”
James focuses on children’s YouTube in the video, because it’s much easier to get upset on a child’s behalf, but nearly all of the points he makes about the way the YouTube algorithm hijacks people’s brains apply equally well to adults on YouTube.
We constantly portray AGI as a manipulative villain, in both sci-fi movies and scientific papers. Of course a future AGI will have access to all of this material, and I hope the prevalence of that portrayal won’t shape its understanding of how it’s supposed to behave.
No joke, I have at times wondered whether my comments warning about AGI are actually counterproductive to AI safety, because they ever-so-slightly shift LLMs’ predicted distribution over how an AGI would behave.