(this comment is kind of a “i didn’t have time to write you a short letter so I wrote you a long one” situation)
re: Infowar between great powers—the view that China+Russia+USA invest a lot of effort into infowar, but mostly “defensively” / mostly trying to shape domestic opinion, makes sense. (After all, it must be easier to control the domestic media/information landscape!) I would tend to expect that doing domestically-focused infowar stuff at a massive scale would be harder for the USA to pull off (wouldn’t it be leaked? wouldn’t it be illegal somehow, or at least something that public opinion would consider a huge scandal?), but on the other hand I’d expect the USA to have superior infowar technology (subtler, more effective, etc). And logically it might also be harder to perceive the effects of USA infowar techniques, since I live in the USA, immersed in its culture.
Still, my overall view is that, although the great powers certainly expend substantial effort trying to shape culture, and have some success, they don’t appear to have any next-gen technology qualitatively different from and superior to the rhetorical techniques deployed by ordinary successful politicians like Trump, social movements like EA or wokeism, advertising / PR agencies, media companies like the New York Times, etc. (In the way that, eg, engineering marvels like the SR-71 Blackbird were generations ahead of competitors’ capabilities.) So I think the overall cultural landscape is mostly anarchic—lots of different powers are trying to exert their own influence and none of them can really control or predict cultural changes in detail.
re: Social media companies’ RL algorithms are powerful but also “they probably couldn’t prevent algorithms from doing this if they tried due to Goodhart’s law”. -- Yeah, I guess my take on this is that the overt attempts at propaganda (aimed at placating the NYT) seem very weak and clumsy. Meanwhile the underlying RL techniques seem potentially powerful, but poorly understood or not very steerable, since social media companies seem to be mostly optimizing for engagement (and not even always succeeding at that; here we are talking on LessWrong instead of tweeting / tiktoking), rather than deploying clever infowar superweapons. If they have such power, why couldn’t left-leaning Silicon Valley prevent the election of Trump using subtle social-media-RL trickery? (Although I admit that the reaction to the 2016 election could certainly be interpreted as Silicon Valley suddenly realizing, “Holy shit, we should definitely try to develop social media infowar superweapons so we can maybe prevent this NEXT TIME.” But then the 2020 election was very close—not what I’d have expected if info-superweapons were working well!)
With Twitter in particular, we’ve had such a transparent look at its operations during the handover to Elon Musk, and it just seems like both sides of that transaction have been pretty amateurish and lacked any kind of deep understanding of how to influence culture. The whole fight seems to have been about where to set one giant lever called “how harshly do we moderate the tweets of leftists vs rightists”. This lever is indeed influential on Twitter culture, and thus culture generally—but the level of sophistication here just seems pathetic.
TikTok is maybe the one case where I’d be sympathetic to the idea that a lot of what appears to be random insane trends/beliefs fueled by SGD algorithms and internet social dynamics is actually the result of fairly fine-grained cultural influence by Chinese interests. I don’t think TikTok is very world-changing right now (as we’d expect, it’s targeting the craziest and lowest-IQ people first), but it’s at least kinda world-changing, and maybe it’s the first warning sign of what will soon be a much bigger threat? (I don’t know much about the details of TikTok the company, or the culture of its users, so it’s hard for me to judge how much fine-grained control China might or might not be exerting.)
Unrelated—I love the kind of sci-fi concept of “people panic but eventually go back to using social media and then they feel fine (SGD does this automatically in order to retain users)”. But of course I think that the vast majority of users are in the “aren’t panicking” / never-think-about-this-at-all category, and there are so few people in the “panic” category (I mean panic specifically over subtle persuasion/manipulation tech that is trying to achieve some specific ideological outcome rather than just maximize engagement) that there would be no impact on the social-media algorithms. I do think it’s plausible that other effects, like “try not to look SO clickbaity that users recognize the addictiveness and leave”, probably show up in algorithms via SGD.
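(Tangent, to make the Goodhart / engagement-proxy point concrete: below is a minimal toy sketch, entirely made up by me and not based on any real platform’s system, of an epsilon-greedy recommender that can only see a click proxy. It ends up funneling nearly all traffic to the clickbait arm even though a latent “satisfaction” signal, which nothing in the loop optimizes, says users would be better served by the other arm.)

```python
import random

# Toy Goodhart illustration (made-up numbers, hypothetical arms):
# arm 0 = clickbait (high click rate, low "true" satisfaction),
# arm 1 = substantive content (lower click rate, higher satisfaction).
CLICK_RATE = [0.6, 0.3]     # the proxy the algorithm can observe and optimize
SATISFACTION = [0.2, 0.8]   # the thing nobody in the loop is optimizing

def run(steps=10_000, epsilon=0.05):
    clicks, pulls = [0.0, 0.0], [1e-9, 1e-9]
    avg_satisfaction = 0.0
    for t in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(2)  # occasional exploration
        else:
            arm = max((0, 1), key=lambda a: clicks[a] / pulls[a])  # exploit the click proxy
        pulls[arm] += 1
        clicks[arm] += random.random() < CLICK_RATE[arm]
        # track the un-optimized satisfaction signal as a running average
        avg_satisfaction += (SATISFACTION[arm] - avg_satisfaction) / (t + 1)
    return pulls, avg_satisfaction

pulls, sat = run()
print([int(p) for p in pulls], round(sat, 2))
# Nearly all impressions go to the clickbait arm; average satisfaction lands near 0.2,
# even though all-arm-1 traffic would have scored ~0.8.
```

(You could bolt a crude churn penalty onto the observed reward and the behavior would shift a bit, but you’d still just be optimizing whatever proxy you can measure, which is part of why I doubt these systems are precision ideological weapons rather than blunt engagement-maximizers.)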
More random thoughts about causes the USA might historically have wanted to wage infowar campaigns over:
Anti-communism during the Cold War, maybe continuing to a kind of generic pro-corporate / pro-growth attitude these days. (But lots of people were pro-communist back in the day, and remain anti-corporate/anti-growth today! And even the Republican party is less and less pro-business… their basic model isn’t to mind-control everyone into becoming fiscal conservatives, but instead to gain power by exploiting the popularity of social conservatism and then use that power to implement fiscal conservatism.)
Maybe I am taking too narrow a view of infowar as “the ability to change people’s minds on individual issues”, when actually I should be considering strategies like “get people hyped up about social issues in order to gain power that you can use for economic issues” as a successful example of infowar? But even if I count this as infowar, it reinforces my point that the most advanced stuff today all seems to be variations on normal smart political strategy and messaging, not some kind of brand-new AI-powered superweapon for changing people’s minds (or redirecting their focus or whatever) in a radically new way.
Since WW2, and maybe continuing to today, the West has tried to ideologically immunize itself against Nazism. This includes a lot of trying to teach people to reject charismatic dictators, to embrace counterintuitive elements of liberalism like tolerance/diversity, and even to deny inconvenient facts like racial group differences for the sake of social harmony. In some ways this has gone so well that we’re getting problems from going too far in this direction (wokeism), but in other ways it can often feel like liberalism is hanging on by a thread and people are still super-eager to embrace charismatic dictators, incite racial conflict, etc.
“Human brains are extremely predisposed to being hacked, governments would totally do this, and the AI safety community is unusually likely to be targeted.” —yup, fully agree that the AI safety community faces a lot of peril navigating the whims of culture and trying to win battles in a bunch of diverse high-stakes environments (influencing superpower governments, huge corporations, etc) where they are up against a variety of elite actors with some very strong motivations. And that there is peril both in the difficulty of navigating the “conventional” human-persuasion-transformed social landscape of today’s world (already super-complex and difficult) and the potentially AI-persuasion-transformed world of tomorrow. I would note, though, that these battles will (mostly?) play out in pretty elite spaces, whereas I’d expect AI information superweapons to have their most powerful impact on the mass public. So I’d expect to have at least some warning in the form of seeing the world go crazy (in a way that seems different from and greater than today’s anarchic internet-social-dynamics-driven craziness) before I myself went crazy. (Unless there is an AI-infowar-superweapon-specific hard takeoff where we suddenly get very powerful persuasion tech but still don’t get the full ASI singularity??)
re: Dath Ilan—this really deserves a whole separate comment, but basically I am also a big fan of the concept of Dath Ilan, and I would love to hear your thoughts on how you would go about trying to “build Dath Ilan” IRL.
What should an individual person, acting mostly alone, do to try and promote a more Dath-Ilani future? Try to practice & spread LessWrong-style individual-level rationality, maybe (obviously Yudkowsky did this with LessWrong and other efforts). Try to spread specific knowledge about the way society works and thereby build energy for / awareness of ways that society could be improved (Inadequate Equilibria kinda tries to do this? seems like there could be many approaches here). Personally I am also always eager to talk to people about specific institutional / political tweaks that could lead to a better, more Dath-Ilani world: Georgism, approval voting, prediction markets, charter cities, etc. Of those, some would seem to build on themselves while others wouldn’t—what ideas seem like the optimal, highest-impact things to work on? (If the USA adopted Georgist land-value taxes, we’d have better land-use policy and faster economic growth, but culture/politics wouldn’t hugely change in a broadly Dath-Ilani direction; meanwhile prediction markets or new ways of voting might have snowballing effects where you get the direct improvement but also you make culture more rational & cooperative over time.)
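(For concreteness on the prediction-markets piece: here is a minimal sketch of Hanson’s logarithmic market scoring rule (LMSR), the standard automated-market-maker mechanism behind a lot of prediction-market designs. The function names, liquidity parameter, and trade sizes are just my own illustration choices, not any particular platform’s implementation.)

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(x / b) for x in q))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices, which double as implied probabilities for each outcome."""
    weights = [math.exp(x / b) for x in q]
    total = sum(weights)
    return [w / total for w in weights]

def buy(q, outcome, shares, b=100.0):
    """Cost of buying `shares` of `outcome` is C(q + delta) - C(q)."""
    new_q = list(q)
    new_q[outcome] += shares
    return lmsr_cost(new_q, b) - lmsr_cost(q, b), new_q

# Toy yes/no market starting at 50/50.
q = [0.0, 0.0]
print([round(p, 3) for p in lmsr_prices(q)])      # [0.5, 0.5]
cost, q = buy(q, 0, 60.0)                          # someone buys 60 "yes" shares
print(round(cost, 2), [round(p, 3) for p in lmsr_prices(q)])  # ~34.44, "yes" price ~0.646
```

(The appealing property is that the market maker always quotes a price and the price vector is literally a probability distribution, which is part of why prediction markets feel like the kind of institution that could snowball: their output is something other institutions can directly consume.)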
What should a group of people ideally do? (Like, say, an EA-adjacent Silicon Valley billionaire funding a significant minority of the EA/rationalist movement to work on this problem together in a coordinated way.) My head immediately jumps to “obviously they should build a rationalist charter city”:
The city doesn’t need truly nation-level sovereign autonomy; the goal would just be to coordinate enough people to move somewhere together a la the Free State Project, gaining enough influence over local government to be able to run our own policy experiments with things like prediction markets, Georgism, etc. (Unfortunately some things, like medical research, are federally regulated, but I think you could do a lot with just local government powers + creating a critical mass of rationalist culture.)
Instead of moving to a random small town and trying to take over, it might be helpful to choose some existing new-city project to partner with—like California Forever, Telosa, Prospera, whatever Zuzalu or Praxis turn into, or other charter cities that have amenable ideologies/goals. (This would also be very helpful if you don’t have enough people or money to create a reasonably-sized town all by yourself!)
The goal would be twofold: first, run a bunch of policy experiments and try to create Dath-Ilan-style institutions (where legal under federal law if you’re still in the USA, etc). And second, try to create a critical mass of rationalist / Dath Ilani culture that can grow and eventually influence… idk, lots of people, including eventually the leaders of other governments like Singapore or the UK or whatever. Although it’s up for debate whether “everyone move to a brand-new city somewhere else” is really a better plan for cultural influence than “everyone move to the Bay Area”, which has been pretty successful at influencing culture in a rationalist direction IMO! (Maybe the rationalist charter city should therefore be in Europe or at least on the East Coast or something, so that we mostly draw rationalists from areas other than the Bay Area. Or maybe this is an argument for really preferring California Forever as an ally, over and above any other new-city project, since that’s still in the Bay Area. Or for just trying to take over Bay Area government somehow.)
...but maybe a rationalist charter city is not the only or best way that a coordinated group of people could try to build Dath Ilan?