Thanks for this Rohin. I’ve been trying to raise awareness about the potential dangers of persuasion/propaganda tools, but you are totally right that I haven’t actually done anything close to a rigorous analysis. I agree with what you say here that a lot of the typical claims being thrown around seem based more on armchair reasoning than hard data. I’d love to see someone really lay out the arguments and analyze them… My current take is that (some of) the armchair theories seem pretty plausible to me, such that I’d believe them unless the data contradicts. But I’m extremely uncertain about this.
I’ve been trying to raise awareness about the potential dangers of persuasion/propaganda tools
I should note that there’s a big difference between “recommender systems cause polarization as a side effect of optimizing for engagement” and “we might design tools that explicitly aim at persuasion / propaganda”. I’m confident we could (eventually) do the latter if we tried to; the question is primarily whether we will try to and, if we do, what its effects will be.
My current take is that (some of) the armchair theories seem pretty plausible to me, such that I’d believe them unless the data contradicts.
Usually, for any sufficiently complicated question (which automatically includes questions about the impact of technologies used by billions of people, since people are so diverse), I think an armchair theory is only slightly better than a monkey throwing darts. So I’m more in the position of “yup, sounds plausible, but that doesn’t constrain my beliefs about what the data will show, and medium-quality data will trump the theory no matter which way it comes out”.
I should note that there’s a big difference between “recommender systems cause polarization as a side effect of optimizing for engagement” and “we might design tools that explicitly aim at persuasion / propaganda”. I’m confident we could (eventually) do the latter if we tried to; the question is primarily whether we will try to and, if we do, what its effects will be.
Oh, then maybe we don’t actually disagree that much! I am not at all confident that optimizing for engagement has the side effect of increasing polarization. It seems plausible, but it’s also totally plausible that polarization is going up for some other reason(s). My concern (as illustrated in the vignette I wrote) is that we seem to be on a slippery slope to a world where persuasion/propaganda is more effective and widespread than it has been historically, thanks to new AI and big data methods. My model is: ideologies and other entities have always used propaganda of various kinds, and there’s always been a race between improving propaganda tech and improving truth-finding tech. But we are currently in a big AI boom, and in particular a Big Data and Natural Language Processing boom, and this seems like it’ll be a big boost to propaganda tech. Unfortunately, I can’t think of ways in which it will correspondingly boost truth-finding across society: while it could maybe be used to build truth-finding tech (e.g. prediction markets, fact-checkers, etc.), it seems like most people in practice just don’t want to adopt truth-finding tech. It’s true that we could design a different society/culture that used all this awesome new tech to be super truth-seeking and have a very epistemically healthy discourse, but it seems like we are not about to do that anytime soon; instead we are going in the opposite direction.
I think that story involves lots of assumptions I don’t immediately believe (but don’t disbelieve either):
People are very deliberately building persuasion / propaganda tech (as opposed to e.g. people like to loudly state opinions and the persuasive ones rise to the top)
Such people will quickly realize that AI will be very useful for this
They will actually try to build it (as opposed to e.g. raising a moral outcry and trying to get it banned)
The resulting AI system will in fact be very good at persuasion / propaganda
AI that fights persuasion / propaganda either won’t be built or will be ineffective (my unreliable armchair reasoning suggests the opposite; it seems to me like right now human fact-checking labor can’t keep up with human controversy-creating labor partly because humans enjoy the latter more than the former; this won’t be true with AI)
And probably there are a bunch of other assumptions I haven’t even thought to question.
I think it seems fine to raise the possibility and do more research (and for all I know CSET or GovAI has done this research) but at least under my beliefs the current action should not be “raise awareness”, it should be “figure out whether the assumptions are justified”.
I think it seems fine to raise the possibility and do more research (and for all I know CSET or GovAI has done this research) but at least under my beliefs the current action should not be “raise awareness”, it should be “figure out whether the assumptions are justified”.
That’s all I’m trying to do at this point, to be clear. Perhaps “raise awareness” was the wrong choice of phrase.
Re: the object-level points: For how I see this going, see my vignette, and my reply to steve. The bullet points you put here make it seem like you have a different story in mind. [EDIT: But I agree with you that it’s all super unclear and more research is needed to have confidence in any of this.]
That’s all I’m trying to do at this point, to be clear.
Excellent :)
For how I see this going, see my vignette, and my reply to steve.
(Link is broken, but I found the comment.) After reading that reply I still feel like it involves the assumptions I mentioned above.
Maybe your point is that your story involves “silos” of Internet-space within which particular ideologies / propaganda reign supreme. I don’t really see that as changing my object-level points very much but perhaps I’m missing something.
I was being unclear, sorry. What I meant was: technically my story involves assumptions like the ones you list in the bullet points, but the way you phrase them is… loaded? Designed to make them seem implausible? Idk, something like that, in a way that made me wonder if you had a different story in mind. Going through them one by one:
People are very deliberately building persuasion / propaganda tech (as opposed to e.g. people like to loudly state opinions and the persuasive ones rise to the top)
This is already happening in 2021 and earlier; in my story it happens more.
Such people will quickly realize that AI will be very useful for this
Again, this is already happening.
They will actually try to build it (as opposed to e.g. raising a moral outcry and trying to get it banned)
Plenty of people are already raising a moral outcry. In my story these people don’t succeed in getting it banned, but I agree the story could be wrong. I hope it is!
The resulting AI system will in fact be very good at persuasion / propaganda
Yep. I don’t have hard evidence, but intuitively this feels like the sort of thing today’s AI techniques would be good at, or at least good-enough-to-improve-on-the-state-of-the-art.
AI that fights persuasion / propaganda either won’t be built or will be ineffective (my unreliable armchair reasoning suggests the opposite; it seems to me like right now human fact-checking labor can’t keep up with human controversy-creating labor partly because humans enjoy the latter more than the former; this won’t be true with AI)
I think it won’t be built & deployed in such a way that collective epistemology is overall improved. Instead, the propaganda-fighting AIs will themselves have blind spots, to allow in the propaganda of the “good guys.” The CCP will have their propaganda-fighting AIs, the Western Left will have theirs, the Western Right will have theirs, etc. (I think what happened with the internet is precedent for this. In theory, having all these facts available at all of our fingertips should have led to a massive improvement in collective epistemology and a massive improvement in truthfulness, accuracy, balance, etc. in the media. But in practice it didn’t.) It’s possible I’m being too cynical here of course!
technically my story involves assumptions like the ones you list in the bullet points, but the way you phrase them is… loaded? Designed to make them seem implausible?
I don’t think it’s designed to make them seem implausible? Maybe the first one? Idk, I could say that your story is designed to make them seem plausible (e.g. by not explicitly mentioning them as assumptions).
I think it’s fair to say it’s “loaded”, in the sense that I am trying to push towards questioning those assumptions, but I don’t think I’m doing anything epistemically unvirtuous.
This is already happening in 2021 and earlier; in my story it happens more.
This does not seem obvious to me (but I also don’t pay much attention to this sort of stuff so I could be missing evidence that makes it very obvious).
The CCP will have their propaganda-fighting AIs, the Western Left will have theirs, the Western Right will have theirs, etc.
That seems correct. But plausibly the best way for these AIs to fight propaganda is to respond with truthful counterarguments.
I don’t really see “number of facts” as the relevant thing for epistemology. In my anecdotal experience, people disagree on values and standards of evidence, not on facts. AIs that can respond to anti-vaxxers in their own language seem way, way more impactful than what we have now.
(I just tried to find the best argument that GMOs aren’t going to cause long-term harms, and found nothing. We do at least have several arguments that COVID vaccines won’t cause long-term harms. I armchair-conclude that a thing has to get to the scale of COVID vaccine hesitancy before people bother trying to address the arguments from the other side.)
Perhaps I shouldn’t have mentioned any of this. I also don’t think you are doing anything epistemically unvirtuous. I think we are just bouncing off each other for some reason, despite seemingly being in broad agreement about things. I regret wasting your time.
That seems correct. But plausibly the best way for these AIs to fight propaganda is to respond with truthful counterarguments.
I don’t really see “number of facts” as the relevant thing for epistemology. In my anecdotal experience, people disagree on values and standards of evidence, not on facts. AIs that can respond to anti-vaxxers in their own language seem way, way more impactful than what we have now.
The first bit seems in tension with the second bit, no? At any rate, I also don’t see number of facts as the relevant thing for epistemology. I totally agree with your take here.
The first bit seems in tension with the second bit, no?
“Truthful counterarguments” is probably not the best phrase; I meant something more like “epistemically virtuous counterarguments”. Like, responding to “what if there are long-term harms from COVID vaccines” with “that’s possible but not very likely, and it is much worse to get COVID, so getting the vaccine is overall safer” rather than “there is no evidence of long-term harms”.