A couple of points.

Note that humans valorize successful brain hacking. We call them “insights”, “epiphanies” and “revelations”, but all they are is successful brain hacks. If you take a popular novel or a movie, for example, there will invariably be a moment when the protagonist hears some persuasive argument and changes their mind on the spot. Behold, a brain hack. We love it and eat it up.
Also note that it is easy to create a non-lethal scissor statement with just a few words, like this bluepilling-vs-redpilling debate from a couple of days ago: https://twitter.com/lisatomic5/status/1690904441967575040. This seems to have a similar effect.
I guess the latter confirms your point about the power of social media, though without any of the bells and whistles of video, LLMs, or bandwagoning.
I’m not sure video, LLMs, or bandwagoning are bells and whistles in this specific context; the point I was trying to get at with each of them is that these dynamics are highly relevant to the current gameboard for AI safety. The issue is how the strength of the effect scales once enough people are exposed, and social media currently supplies exactly that scale (a toy model of the tipping dynamic is sketched below). With the bandwagoning effect, particularly the graphs, I feel like I did a decent job rising well above and beyond the buzzword users. With LLM propaganda, the issue is that it helps create human propagandists iteratively, which is much more effective than the scaling of classic propaganda (I made a reference to the history of propaganda in the original version, but that got edited out for the public version). Looking back on it, it’s less clear how well I did with the video element, which I only touched on. The effect from video really is quite powerful, even if the buzzword users have already covered it and it’s not very relevant to LLMs.
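To pin down what I mean by scaling: here’s a minimal toy sketch of the tipping dynamic, using a Granovetter-style threshold cascade. To be clear, this is a standard illustrative model I’m substituting in, not the actual graphs from the post, and every parameter in it is made up:

```python
import random

def final_adoption(n_agents, seed_fraction, sigma=0.15, rng_seed=0):
    """Granovetter-style threshold cascade: each agent adopts the message
    once the current fraction of adopters exceeds their personal threshold.
    The initially 'seeded' agents adopt unconditionally (threshold 0)."""
    rng = random.Random(rng_seed)
    n_seeds = int(n_agents * seed_fraction)
    thresholds = [0.0] * n_seeds + [
        min(max(rng.gauss(0.5, sigma), 0.0), 1.0)  # clip to [0, 1]
        for _ in range(n_agents - n_seeds)
    ]
    adopted = n_seeds
    while True:
        frac = adopted / n_agents
        now = sum(1 for t in thresholds if t <= frac)
        if now == adopted:  # fixed point: nobody else tips over
            return frac
        adopted = now

for s in (0.05, 0.20, 0.30):
    print(f"seeded {s:.0%} -> final adoption {final_adoption(50_000, s):.0%}")
```

The toy model’s point is the discontinuity: below a critical seed fraction the message fizzles out near its starting share, while just above it the cascade runs nearly to saturation. That tipping behavior is why reaching “enough people” is the crux.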
The link is broken; I’d like to see that scissor statement. Examples are important for thinking about concepts.
Fixed the link. It’s the blue pill vs. red pill poll, which has been discussed here several times already, just not as a near-scissor statement.
I agree with your point that bandwagoning is a known exploit and that LLMs are a force multiplier there compared to, say, tweet bots, assuming I got your point right. Getting LLMs to generate and iterate the message in various media forms until something sticks is definitely a hazard (a bare sketch of that loop follows the quote below). I guess my point is that this level of sophistication may not be necessary, as you said:
The human brain is a kludge of spaghetti code, and it therefore follows that there will be exploitable “zero days” within most or all humans.
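To make the loop structure concrete, here’s a bare sketch of the generate-and-iterate hazard, in the fuzzing-for-zero-days sense of the quote above. Everything in it is a hypothetical placeholder: generate_variant stands in for an LLM rephrasing call and engagement_score for a measured feedback signal, both stubbed with random noise here; none of it is from the original post:

```python
import random

def generate_variant(message: str, rng: random.Random) -> str:
    # Hypothetical stand-in for "ask an LLM to remix the message".
    return f"{message} (variant {rng.randint(0, 999)})"

def engagement_score(message: str, rng: random.Random) -> float:
    # Hypothetical stand-in for a measured feedback signal (shares, polls).
    return rng.random()

def iterate_until_it_sticks(seed_message, rounds=20, pool=8, rng_seed=0):
    """Greedy evolutionary search: keep remixing the current best message
    and keep whichever variant scores highest on the feedback signal."""
    rng = random.Random(rng_seed)
    best, best_score = seed_message, engagement_score(seed_message, rng)
    for _ in range(rounds):
        for _ in range(pool):
            candidate = generate_variant(best, rng)
            score = engagement_score(candidate, rng)
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score

print(iterate_until_it_sticks("some seed message"))
```

The hazardous part is the selection step, not the generator: the feedback signal does the optimizing, so the operator never needs to understand why a variant works, which is exactly how you’d stumble onto one of those zero-days.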
Suddenly it clicked, and I realized what I was getting completely wrong. It was such a stupid mistake on my end, in retrospect:
I never mentioned anything about how public opinion drives regime change, tax noncompliance, military factors like the popularity of wars and elite soldier/officer recruitment, and permanent cultural shifts like the 60s/70s that intergenerationally increase the risk of regime change, tax noncompliance, and military unpopularity. Information warfare is a major factor driving government interest in AI.
I also didn’t mention anything about the power generated by steering elites vs. the masses; everything in this post was about the masses unless stated otherwise, which it never was. The masses always get steered, especially in democracies, so steering them is not interesting except to people who care a ton about who wins elections. Steering elites, by contrast, means steering people in AI safety, and in all sorts of other places that are as distinct and elite relative to society as AI safety is.