it’d say “this bot detected the claim that vaccines cause autism, which is in conflict with the view held by The Lancet, one of the world’s most prominent medical journals”.
In that case, I don’t see the point. After all, anti-vaxxers don’t deny that prominent medical professionals disagree with their position; they suspect that those professionals hold that position because of a combination of biases and money from the vaccine industry.
But not all people in the audience would react like that to michaelkeenan’s example warning. Some people would presumably value being informed of authoritative sources contradicting a claim that vaccines cause autism.
(And if your objection went through for fact checking framed as contradiction reporting, why wouldn’t it go through for fact checking framed as fact checking? My mental model of an anti-vaxxer has them responding as negatively to being baldly contradicted as to being informed, “The Lancet says this is wrong”.)
The anti-vax case is one of the hardest. More often, people are just accidentally wrong. Consider this exchange on Hacker News, which included checkable claims like:
“The UK is a much more violent society than the US, statistically”
“There are dozens of U.S. cities with higher per capita murder rates than London or any other city in the UK”
“Murder rates are higher in the US, but murder is a small fraction of violent crime. All other violent crime is much more common in the UK than in the US.”
There would also be a useful effect for observers. That Hacker News discussion contained no citations, so no one was convinced and I doubt any observers knew what to think. But if a fact-checker bot were noting which claims were true and which weren’t, then observers would know which claims were correct (or rather, which claims were consistent with official statistics).
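To gesture at what “noting which claims were consistent with official statistics” might look like mechanically, here is a toy sketch. The statistic names and values are invented placeholders, not real data, and nothing like this is proposed in the thread itself:

```python
# Toy sketch of a fact-checker bot's core check: compare a claimed figure
# against an official statistic. OFFICIAL_STATS holds placeholder values,
# not real crime data.

OFFICIAL_STATS = {
    "us_murder_rate_per_100k": 5.0,   # placeholder value
    "uk_murder_rate_per_100k": 1.2,   # placeholder value
}

def check_claim(stat_key: str, claimed_value: float, tolerance: float = 0.15) -> str:
    """Label a quantitative claim by comparing it to the official figure,
    allowing a relative tolerance for rounding and data-vintage differences."""
    official = OFFICIAL_STATS.get(stat_key)
    if official is None:
        return "unverifiable: no official statistic on record"
    if abs(claimed_value - official) <= tolerance * official:
        return f"consistent with official statistics ({official})"
    return f"inconsistent with official statistics ({official})"

print(check_claim("us_murder_rate_per_100k", 5.1))
```

The real difficulty, of course, is the part this sketch skips: parsing a free-text claim into a statistic and a number in the first place.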
If these fact-checkers were extremely common, they could still help even with anti-vaccine people. If you’re against vaccines, but you’ve seen the fact-checker bot be correct 99 other times, then you might give credence to its claims.
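One way to make “give credence after seeing it be correct 99 times” precise is Laplace’s rule of succession. This is a minimal sketch of that idea, not anything the thread itself proposes:

```python
# Laplace's rule of succession: expected accuracy of the bot after
# observing its track record, starting from a uniform (Beta(1, 1)) prior.

def posterior_accuracy(correct: int, incorrect: int) -> float:
    """Posterior mean accuracy given a count of correct and incorrect checks."""
    return (correct + 1) / (correct + incorrect + 2)

# After seeing the bot be right 99 times and wrong 0 times:
credence = posterior_accuracy(99, 0)
print(round(credence, 3))  # 100/101, about 0.99
```

With no track record at all, the same formula gives 0.5, which matches the intuition that an untested bot deserves no special trust.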
That’s subject to Goodhart’s Law. If you start judging bots by their behavior in other cases, people will exploit your judging process by specifically designing bots that fact-check well everywhere except on a few chosen issues, which makes judging bots by their behavior in other cases useless.
(Of course, they won’t think of it that way, they’ll think of it as “using our influence to promote social change” or some such. But it will happen, and has already happened for non-bot members of the media.)
Heck, Wikipedia is the prime example.
I don’t know why someone downvoted this, unless out of a political motivation to promote such changes in this way. It seems obviously true that this would happen.