Technically my story involves assumptions like the ones you list in the bullet points, but the way you phrase them is… loaded? Designed to make them seem implausible?
I don’t think it’s designed to make them seem implausible? Maybe the first one? Idk, I could say that your story is designed to make them seem plausible (e.g. by not explicitly mentioning them as assumptions).
I think it’s fair to say it’s “loaded”, in the sense that I am trying to push towards questioning those assumptions, but I don’t think I’m doing anything epistemically unvirtuous.
This is already happening in 2021 and earlier; in my story it happens more.
This does not seem obvious to me (but I also don’t pay much attention to this sort of stuff so I could be missing evidence that makes it very obvious).
The CCP will have their propaganda-fighting AIs, the Western Left will have theirs, the Western Right will have theirs, etc.
That seems correct. But plausibly the best way for these AIs to fight propaganda is to respond with truthful counterarguments.
I don’t really see “number of facts” as the relevant thing for epistemology. In my anecdotal experience, people disagree on values and standards of evidence, not on facts. AIs that can respond to anti-vaxxers in their own language seem way, way more impactful than what we have now.
(I just tried to find the best argument that GMOs aren’t going to cause long-term harms, and found nothing. We do at least have several arguments that COVID vaccines won’t cause long-term harms. I armchair-conclude that a thing has to get to the scale of COVID vaccine hesitancy before people bother trying to address the arguments from the other side.)
Perhaps I shouldn’t have mentioned any of this. I also don’t think you are doing anything epistemically unvirtuous. I think we are just bouncing off each other for some reason, despite seemingly being in broad agreement about things. I regret wasting your time.
That seems correct. But plausibly the best way for these AIs to fight propaganda is to respond with truthful counterarguments.
I don’t really see “number of facts” as the relevant thing for epistemology. In my anecdotal experience, people disagree on values and standards of evidence, not on facts. AIs that can respond to anti-vaxxers in their own language seem way, way more impactful than what we have now.
The first bit seems in tension with the second bit, no? At any rate, I also don’t see number of facts as the relevant thing for epistemology. I totally agree with your take here.
The first bit seems in tension with the second bit, no?
“Truthful counterarguments” is probably not the best phrase; I meant something more like “epistemically virtuous counterarguments”. Like, responding to “what if there are long-term harms from COVID vaccines” with “that’s possible but not very likely, and it is much worse to get COVID, so getting the vaccine is overall safer” rather than “there is no evidence of long-term harms”.