Sorry, I was being unclear. What I meant was: technically my story involves assumptions like the ones you list in the bullet points, but the way you phrase them is… loaded? Designed to make them seem implausible? Something like that, anyway, in a way that made me wonder if you had a different story in mind. Going through them one by one:
> People are very deliberately building persuasion / propaganda tech (as opposed to e.g. people like to loudly state opinions and the persuasive ones rise to the top)
This is already happening in 2021 and earlier; in my story it happens more.
> Such people will quickly realize that AI will be very useful for this
Again, this is already happening.
> They will actually try to build it (as opposed to e.g. raising a moral outcry and trying to get it banned)
Plenty of people are already raising a moral outcry. In my story these people don’t succeed in getting it banned, but I agree the story could be wrong. I hope it is!
> The resulting AI system will in fact be very good at persuasion / propaganda
Yep. I don’t have hard evidence, but intuitively this feels like the sort of thing today’s AI techniques would be good at, or at least good enough to improve on the state of the art.
> AI that fights persuasion / propaganda either won’t be built or will be ineffective (my unreliable armchair reasoning suggests the opposite; it seems to me like right now human fact-checking labor can’t keep up with human controversy-creating labor partly because humans enjoy the latter more than the former; this won’t be true with AI)
I think it won’t be built & deployed in such a way that collective epistemology is overall improved. Instead, the propaganda-fighting AIs will themselves have blind spots, to allow in the propaganda of the “good guys.” The CCP will have their propaganda-fighting AIs, the Western Left will have theirs, the Western Right will have theirs, etc. (I think what happened with the internet is precedent for this. In theory, having all these facts available at our fingertips should have led to a massive improvement in collective epistemology and a massive improvement in truthfulness, accuracy, balance, etc. in the media. But in practice it didn’t.) It’s possible I’m being too cynical here, of course!
> technically my story involves assumptions like the ones you list in the bullet points, but the way you phrase them is… loaded? Designed to make them seem implausible?
I don’t think it’s designed to make them seem implausible? Maybe the first one. I don’t know; I could equally say that your story is designed to make them seem plausible (e.g. by not explicitly mentioning them as assumptions).
I think it’s fair to say it’s “loaded”, in the sense that I am trying to push towards questioning those assumptions, but I don’t think I’m doing anything epistemically unvirtuous.
> This is already happening in 2021 and earlier; in my story it happens more.
This does not seem obvious to me (but I also don’t pay much attention to this sort of stuff, so I could be missing evidence that makes it very obvious).
> The CCP will have their propaganda-fighting AIs, the Western Left will have theirs, the Western Right will have theirs, etc.
That seems correct. But plausibly the best way for these AIs to fight propaganda is to respond with truthful counterarguments.
I don’t really see “number of facts” as the relevant thing for epistemology. In my anecdotal experience, people disagree on values and standards of evidence, not on facts. AIs that can respond to anti-vaxxers in their own language seem way, way more impactful than what we have now.
(I just tried to find the best argument that GMOs aren’t going to cause long-term harms, and found nothing. We do at least have several arguments that COVID vaccines won’t cause long-term harms. I armchair-conclude that a thing has to get to the scale of COVID vaccine hesitancy before people bother trying to address the arguments from the other side.)
Perhaps I shouldn’t have mentioned any of this. I also don’t think you are doing anything epistemically unvirtuous. I think we are just bouncing off each other for some reason, despite seemingly being in broad agreement about things. I regret wasting your time.
> That seems correct. But plausibly the best way for these AIs to fight propaganda is to respond with truthful counterarguments.
>
> I don’t really see “number of facts” as the relevant thing for epistemology. In my anecdotal experience, people disagree on values and standards of evidence, not on facts. AIs that can respond to anti-vaxxers in their own language seem way, way more impactful than what we have now.
The first bit seems in tension with the second bit, no? At any rate, I also don’t see number of facts as the relevant thing for epistemology. I totally agree with your take here.
> The first bit seems in tension with the second bit, no?
“Truthful counterarguments” is probably not the best phrase; I meant something more like “epistemically virtuous counterarguments”. Like, responding to “what if there are long-term harms from COVID vaccines” with “that’s possible but not very likely, and it is much worse to get COVID, so getting the vaccine is overall safer” rather than “there is no evidence of long-term harms”.