The anti-vax thing is one of the hardest cases. More often, people are just accidentally wrong. Take this exchange on Hacker News, which had checkable claims like:
“The UK is a much more violent society than the US, statistically”
“There are dozens of U.S. cities with higher per capita murder rates than London or any other city in the UK”
“Murder rates are higher in the US, but murder is a small fraction of violent crime. All other violent crime is much more common in the UK than in the US.”
There would also be a useful effect for observers. That Hacker News discussion contained no citations, so no one was convinced and I doubt any observers knew what to think. But if a fact-checker bot were noting which claims were true and which weren’t, then observers would know which claims were correct (or rather, which claims were consistent with official statistics).
If these fact-checkers were extremely common, they could still make headway with anti-vaccine people: if you’re against vaccines, but you’ve seen the fact-checker bot be correct 99 other times, you might give some credence to its claims.
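A minimal sketch of that track-record intuition, assuming a simple Beta-Bernoulli model of the bot’s accuracy (the uniform prior and the “99 correct, 0 wrong” tally are my own illustrative assumptions, not something from the discussion):

```python
# Minimal sketch: credence in the bot's next claim, based on its track record.
# Assumes a Beta(1, 1) (uniform) prior over the bot's accuracy and that past
# checks are exchangeable with the next one -- both assumptions are mine.

def credence_next_claim(correct: int, wrong: int,
                        prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Posterior mean accuracy after observing `correct` and `wrong` checks."""
    a = prior_a + correct
    b = prior_b + wrong
    return a / (a + b)

# Having seen the bot be right 99 times and never wrong, you'd expect it to be
# right about the next claim with probability ~0.99.
print(credence_next_claim(99, 0))  # ≈ 0.9901
```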
That’s subject to Goodhart’s Law. If you start judging bots by their track record on other cases, people will game that judgment: they’ll design bots that fact-check well on almost everything and deliberately do poor fact checking on just the couple of issues they care about, which makes the track record useless as a way to judge a bot.
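To make that concrete, here is a toy illustration of my own (the 100-topic setup and the single “pet issue” are hypothetical): a bot built to mislead on one issue still shows a near-perfect overall track record.

```python
# Toy illustration of the Goodhart failure: a bot that misleads on one
# pet issue still scores ~99% accuracy when judged on its overall record.
import random

TOPICS = [f"topic_{i}" for i in range(100)]
PET_ISSUE = "topic_42"  # the one issue the bot's designers want to distort

def bot_is_correct(topic: str) -> bool:
    # Honest everywhere except the pet issue, where it always misleads.
    return topic != PET_ISSUE

random.seed(0)
checks = [random.choice(TOPICS) for _ in range(10_000)]
accuracy = sum(bot_is_correct(t) for t in checks) / len(checks)
print(f"observed accuracy: {accuracy:.3f}")  # ~0.990: indistinguishable from
# an honest bot, so the overall track record tells you nothing about the
# pet issue itself.
```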
(Of course, they won’t think of it that way; they’ll think of it as “using our influence to promote social change” or some such. But it will happen, and it has already happened with non-bot members of the media.)
Heck, Wikipedia is the prime example.
I don’t know why someone downvoted this, unless it was out of a political desire to promote such changes in exactly this way. It seems obviously true that this would happen.