to make a public list of journalists who are or aren’t trustworthy
This doesn’t work, as you don’t know if the list (or its creators) are trustworthy. This is a smaller version of an unsolvable problem (unsolvable because you need an absolute reference point but only have relative reference points). An authority can keep an eye on everything under its control, but it cannot keep an eye on itself: “Who watches the watchers?” This is why a ministry of truth is a bad idea and why misinformation is impossible to combat.

It’s tempting to say that openness of information is a solution (that if everyone can voice their opinions, observers can come to a sound conclusion themselves), and while this does end better, you don’t know if, for instance, a review site is deleting user reviews or not. (I just realized this is why people value transparency. But you don’t know if a seemingly transparent entity is actually transparent or just pretending to be.) You can use technology which is fair or secure by design, but authorities (like the government) always make sure that this technology can’t exist.
The convenient thing about journalism is that the problems we’re worried about here are public, so you don’t need to trust the list creators as much as you would in other situations. This is why I suggest giving links to the articles, so anyone reading the list can verify for themselves that the article commits whichever sin it’s accused of.
The trickier case would be protecting against the accusers lying (i.e., telling journalist A something bad and then claiming that they made it up). If you have decent verification of accusers’ identities, you might still get a good enough signal-to-noise ratio, especially if you include positive ‘reviews’.
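As a toy sketch of that signal-to-noise idea (the weighting scheme below is entirely my own invention, not something proposed in the thread): unverified accounts could be down-weighted rather than discarded, and positive reviews counted alongside negative ones.

```python
# Hypothetical sketch: aggregate accounts about a journalist into one rough
# signal. The weights are arbitrary illustrations, not a proposed standard.
def trust_signal(reviews):
    """reviews: list of (verified, positive) booleans, one per accuser.
    Returns a score in [-1, 1]; unverified reports are down-weighted
    rather than discarded, so noise is dampened but not censored."""
    if not reviews:
        return 0.0
    score = 0.0
    for verified, positive in reviews:
        weight = 1.0 if verified else 0.2  # assumed down-weighting factor
        score += weight if positive else -weight
    return score / len(reviews)

# Three verified complaints outweigh a pile of unverified praise.
print(trust_signal([(True, False)] * 3 + [(False, True)] * 5))  # -0.25
```

Whether 0.2 is the right discount for an unverified accuser is exactly the kind of judgment call such a list’s maintainers would have to make openly.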
You can still lie by omission: admitting evidence that shows person A’s wrongdoings while dismissing evidence that shows either person A’s acts of trustworthiness or person B’s wrongdoings.
If I do 10 things, 8 of which are virtuous and 2 of which are bad, and you communicate only those two bad things to the world, then you will have deceived your listeners. Meanwhile, if another person does 8 things which are bad and 2 which are virtuous, you could share only those two virtuous things. One-sidedness can be harmful and biased without ever lying. (Negative people tend to be in this group, I think, especially if they’re intelligent.)
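The 8-good/2-bad arithmetic can be made concrete with a toy calculation (the counts come from the paragraph above; the code itself is just an illustration):

```python
# Selective reporting flips the picture without a single false statement.
def bad_fraction(acts):
    """Fraction of the given acts that are bad."""
    return sum(act == "bad" for act in acts) / len(acts)

person_a = ["good"] * 8 + ["bad"] * 2   # mostly virtuous
person_b = ["bad"] * 8 + ["good"] * 2   # mostly bad

# A one-sided reporter covers only A's bad acts and only B's good acts.
reported_a = [act for act in person_a if act == "bad"]
reported_b = [act for act in person_b if act == "good"]

print(bad_fraction(person_a), "->", bad_fraction(reported_a))  # 0.2 -> 1.0
print(bad_fraction(person_b), "->", bad_fraction(reported_b))  # 0.8 -> 0.0
```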
A lot of online review sites are biased, despite essentially being designed to represent regular people rather than some authority which might lie to you. They silently delete reviews, selectively accuse reviews of breaking rules (holding a subset of them to a much higher standard, or claiming that reviews are targeted harassment by some socially unappealing group), add fake votes themselves, etc.
We can’t solve all problems with journalism, but I hope we could at least solve the narrow problem of “I said X, journalist reported that I said Y”. (Such a thing has happened to me, too.)
With other problems, at least I can take some lesson about how to talk to journalists more carefully the next time. Perhaps I should shut up and refuse to comment on things where I don’t have 100% certainty, because the journalist will make me sound 100% certain. Perhaps I shouldn’t provide a list of 8 good things and 2 bad things, because the journalist will only report the bad things; I should instead only mention the one or two most relevant things. Etc.
But if I say X and the journalist writes Y, there is nothing I can do to protect against this kind of problem (other than not talking to journalists at all).
This doesn’t work, as you don’t know if the list (or its creators) are trustworthy.
Yes, so you could do this within an organization or a community where you generally trust the other members. To avoid even exaggeration or the like, the best way would be for the complainer to provide an exact quote of what they said, and an exact quote of what was reported.
Which would require recording the words you say to the journalist, which is probably a good idea. (Check your local laws to see whether you need to warn the journalist about this, or whether you can simply do it without them knowing.)
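Such a complaint could be kept in a uniform shape so readers can check it themselves; every field name and value below is a made-up placeholder, not an agreed format:

```python
# Purely illustrative entry format for a shared list; the journalist name,
# outlet, and links are hypothetical placeholders.
entry = {
    "journalist": "J. Doe",                        # hypothetical name
    "outlet": "Example Times",                     # hypothetical outlet
    "said": "exact quote of what I actually said",
    "reported": "exact quote of what was printed",
    "article_link": "https://example.com/article",   # so readers can verify
    "recording_link": "https://example.com/audio",   # the recorded original
    "accuser_verified": True,
}

# A reader only needs the two quotes and the links to judge for themselves.
for field in ("said", "reported", "article_link", "recording_link"):
    print(field, ":", entry[field])
```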
There is a worry about deepfakes, so even this would only work in a community where people trust each other.
If the journalist acts in good faith, I think you will be alright. If not, there’s nothing you can do, whatsoever.
Coming up with reasons is almost too easy:

1: The journalist can write an article about you even if you’ve never talked to them.

2: A journalist can start out trustworthy and then change for the worse. (Most untrustworthy authorities today grew powerful by being trustworthy. Now that they’ve created their public image of impartiality and fairness, they can burn it for years. Examples include Google and Wikipedia.)

3: If you record me saying “I wouldn’t say I’m very interested in cars”, you just cut out the first part of the video, and now you have me saying “I’m very interested in cars”. If I quote another person, “X said that Y people are bad”, you could cut out the part of me saying “Y people are bad”. The deeper and more complex a subject you can get me to talk about, the easier it would be to take me out of context. Making Jordan Peterson look bad is trivial, for instance.

4: Even if you have evidence that your words were twisted, you’ll lose if your evidence can’t reach other people. So if your values don’t align with the average journalist’s, or if your reputation is bad, you might find yourself relying on getting the word out by having a social media post go viral or something.
Personally, if I see a journalist or website treating anyone unfairly, I make a mental note that they’re inherently untrustworthy. I’d contact such people only if they had a stake in releasing my story (so that our goals align). As you may imagine, my standards result in me not bothering with about 90% of society. I rarely attempt to solve problems like this, because I have solved them in the past and realized that the solution is actually unwanted (that many things are flawed on purpose, and not because they lack intelligent people to fix them).
Things would be better if society as a whole valued truthfulness, and if winning directly (rather than with underhanded tricks) was associated with higher social status. These are the upstream changes I’d like to see in the world.
If the journalist acts in good faith, I think you will be alright. If not, there’s nothing you can do, whatsoever.
That’s wrong and ignores why journalists choose to write about people and the constraints under which journalists are operating.
Most people on LessWrong who might be interviewed by journalists would be interviewed because they can be presented as an expert on a subject. If they don’t talk to the journalist, the journalist will usually try to find another expert to talk to them.
Right, one constraint is power, and this constraint is actually the most important. In the case of a power imbalance, though, there’s nothing the weaker party can really do but rely on the goodwill of the other party. It’s the stronger party’s choice how things work out, to the extent that the game board favors them.
If the journalist isn’t too powerful, if they benefit from listening to you, and if they’re not entirely obsessed with pushing a narrative which goes against your interests or knowledge, then things are favorable and more likely to turn out well.
My argument is that we can consider these things (power difference, alignment of views, the good/bad faith of the journalist in question, etc.) as parameters, and that the outcome depends entirely on these parameters, and not on the things that we pretend are important.
Is it, for instance, good advice to say “Word yourself carefully so that you cannot be misinterpreted”? For how much effort Jordan Peterson put into this, it didn’t do much to help his reputation.
Reputation, power and interests matter; they are the real factors. Things like honesty, truthfulness, competence and morality are the things that we pretend matter, and it’s even a rule that we must pretend they matter, as breaking the fourth wall (as I’m doing here) is considered bad taste. But the pretend-game gets in the way of thinking clearly. And I think this “advice for journalists” post was submitted in the first place because somebody noticed that the game being played didn’t align with what it was “supposed” to be. The reason they noticed is that journalists aren’t putting much effort into their deception anymore, which is because the balance of power has been skewed so much.
Your argument ignores the positions you are arguing with. Nobody here has a naive idea of what drives journalists.
Generally, some of the ideas here are still potentially useful; they just don’t get you any guarantees.
When I say “There’s nothing you can do about journalists screwing you over”, I mean it like “There’s nothing you can do about the police screwing you over”. In 90% of cases, you probably won’t be screwed over, but the distribution of power makes it easy for them to make things difficult for you if they hate you enough. Another example is “Unprotected Wi-Fi isn’t secure”: you can use McDonald’s internet for your online banking for years without being hacked, so in practice you’re only a little insecure, but the statement “it’s insecure” just means “whether or not you’re safe no longer depends on yourself, but on other people’s intentions”.
From this perspective, I’m warning against something which may never even happen; I do so merely because a bad actor could exploit these attack vectors. I’m also speaking very generally, in a larger scope than just LessWrong users talking to journalists. This probably adds to the feeling of our conversation being disconnected.
But I will have to disagree about nobody being naive. When two entities interact, and one of the entities is barely making an effort to please the other party, it’s because of a difference in power. A small company may go out of its way to help you if you call its customer support line, whereas even getting in touch with a website like Facebook (unless it’s through the police) is genuinely hard.
The post says “Journalists exist to help us understand the world. But if you are a journalist, you have to be good enough to deserve the name”, which seems to mean “If you’re going to trade, you need to provide something of value yourself, like offering a service”. I think this is true for journalists as individuals, but not for companies which employ journalists. If these people won’t treat you with respect, it’s because they don’t have to, and arguing with them is entirely pointless, even if you’re right. Nothing but power will guarantee a difference, and if a journalist treats you kindly, it’s probably because they have integrity (which is one of the forces capable of resisting Moloch).
I’m repeating myself a bit here, but hopefully I’ve made my position clearer in the process.
I mostly agree, just some nitpicking:
If you record me saying “I wouldn’t say I’m very interested in cars”, you just cut out the first part of the video, and now you have me saying “I’m very interested in cars”.
This is exactly an example where, if you also record the conversation and then write a short post saying “I said this …, he reported that …, listen for yourself here …”, this should make me dramatically lose credibility among anyone who knows you. (Plus there’s a small chance of your article going viral. Or at least, any time anyone mentions my name in the future, someone else can link your article in reply.)
Also, if e.g. everyone in the rationalist community started doing this, we could collectively keep one wiki page containing all of this. (A page with more examples is a more useful resource.) And every rationalist who doesn’t have previous experience with journalists could easily look up a name there.
But things like that happen all the time, and most of what people know about most topics is superficial, meaning that they’ve only heard the accusations, and they’re only going to encounter the correction if they care to have a conversation about the topic. If the topic is politically charged, and these people spend time in politically biased communities, then it’s unlikely that anyone is going to show them the evidence that they’re wrong. You’re not incorrect, but think about the ratio of rationalists to non-rationalists: the reach of the media vs. the number of people who will bother to correct those who don’t know the full story.
It would also be easy for the website in question to say “You’ve been accused of doing X, which is bad. We don’t tolerate bad behaviour on our platform” and ban you before you get to defend yourself. If the misunderstanding is bad enough, online websites can simply decide that even talking about you, or “defending” you, is a sign of bad behaviour. (I think this sort of happened to Kanye West when he had a manic episode in which he communicated things which are hard to understand and easy to misunderstand.)
we could collectively keep one wiki page containing all of this
There’s a Wikipedia page on “Gamergate”, written largely by people who don’t know what happened. And there’s a “Gamergate Wiki” with tons of information (44 pages) with every detail documented in chronological order. I want to ask you two questions about this Wiki with the “other side of the story”:
1: Have you ever heard of it?

2: Can you even find it? (The only link I have myself is an archived page.)
By coincidence, 1 yes, but 2 no. And yes, that is a good example of how one side of the debate was nuked from the entire internet, which many people would believe impossible.
(Could you please send me the link in a private message?)
Solutions do not have to be perfect to be useful. Trust can be built up over time.
misinformation is impossible to combat
Take the US government, which at the same time tells Facebook not to delete the anti-vaccine misinformation that the US government itself is spreading, while telling Facebook to delete certain anti-vaccine misinformation that the US government doesn’t like. It’s obvious that such institutions aren’t trustworthy, and thus they have a hard time fighting misinformation.
If the US government stopped lying, it would find it a lot easier to fight misinformation.
Outside of the government, nobody funds, with sufficient capital, an organization that values truth and has a mission to fight misinformation. All the organizations that have “fighting misinformation” on their banner are highly partisan and not centered around valuing truth.
The fact that nobody in the media asked Kamala Harris whether Joe Biden was wrong to spread anti-vax misinformation as commander in chief tells you a lot about how much the mainstream media cares about misinformation.
“Fighting misinformation” often means “rejecting views which go against one’s own political narrative”. Scientists care the most about truth, and they aren’t afraid of challenging and questioning what is known. People heavily invested in politics don’t actually care all that much about truth; they just pretend to because it sounds noble. The “truthseeking” kind of person is a bit of a weirdo; not a lot of them exist.
It’s sad that the problem has gotten bad enough that even people on here have recognized it, but it’s nice not seeing comments which essentially say “Official sources are untrustworthy? That’s a bold claim. Give me evidence, from a source that I consider official and trustworthy, of course.”
But I really want to point out that it’s mathematically impossible to combat falsehood, and it doesn’t matter whether you call it “misinformation” or “disinformation”. The very approach fundamentally misunderstands how knowledge works.
1: Science is about refining our understanding, which means challenging it rather than attacking anyone who disagrees. It must be “open to modification” rather than “closed”.

2: It’s impossible to know if there are any unknown unknowns that one is missing. Absolute certainty does not seem to exist in knowledge.

3: Many things depend on definitions, which are arbitrary. “Is X a mental illness?” is decided not by X, but by a person’s relationship to X.

4: Any conversation which has some intellectual weight is going to be difficult, and unless you can understand what the other person is saying, you cannot know if they’re wrong.

5: Language seems to have a lot of relativity and unspoken assumptions. If you say “Death is bad”, you may mean “For a human, the idea of death is uncomfortable if it prevents them from something that they’re capable of doing”. Arguing against “Death is bad” is trivial: “If death didn’t exist, neither would the modern man, for we evolved through Darwinism”.

6: People with only superficial understandings always outnumber those who know better. Consensuses favor quantity over quality, but those who are ahead of the rest must necessarily be a minority who possess obscure knowledge which is difficult to communicate. I’d go as far as saying that getting average people involved in science was a mistake. 99% of people aren’t knowledgeable enough to understand the vaccine, so their position on the matter depends on the political bias of the authority they trust, which makes everything they have to say about the topic worthless. Except, of course, statements like “The government stated X, now they’re saying Y, so they either lied in the past or they’re lying now”, which is just basic logic.
There’s no reason why you would need absolute certainty to make progress in fighting misinformation.
When the US government wanted Facebook not to delete the misinformation it spread to get people in the Philippines to oppose the Chinese vaccine, it did not argue to Facebook that its misinformation was truthful.
If it’s not about truth value, then it’s not about misinformation. It’s more about manipulation and the harmfulness of certain information, no?
My point is about the imperfections/limitations of language. If I say “the vaccine is safe”, how safe does it have to be for my statement to be true? Is a one-in-a-million risk a proof by contradiction, or is it evidence of safety? Where’s the cut-off for ‘safety’?
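A minimal sketch of that cut-off problem (the risk numbers are hypothetical):

```python
# The same underlying risk gets opposite labels depending on an arbitrary
# cut-off, which is the sense in which "the vaccine is safe" has no fixed
# truth value. All numbers here are hypothetical.
def label(risk, cutoff):
    """Collapse a probability into a binary safety verdict."""
    return "safe" if risk <= cutoff else "unsafe"

risk = 1e-6  # a one-in-a-million adverse-event rate (hypothetical)

print(label(risk, cutoff=1e-5))  # safe
print(label(risk, cutoff=1e-7))  # unsafe -- same fact, different verdict
```

The binary label throws away the number; two speakers with different cut-offs can honestly assert contradictory sentences about the same data.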
I do think fighting “bad-faith manipulation” is doable at times, but I don’t think you can label anything as being true/false for certain.
Another point, which I should have mentioned earlier, is that removing false information can be harmful. Better to let it stay alongside the counter-arguments which get posted, so that observers can read both sides and judge for themselves. Believing in something false is a human right. Imagine, for instance, if believing (or not believing) in God were actually illegal.
If you actually want to fight misinformation, you need to do more than focus on single claims. You actually need to speak about a domain of knowledge in a trustworthy way instead of just making claims for propaganda purposes.
A list of experiences of people with specific journalists doesn’t give you certainty about the habits of the journalists, but it’s better than nothing. Additionally, it can pressure the journalists into behaving better, because they don’t want to be shamed.
This doesn’t work, as you don’t know if the list (or its creators) are trustworthy. This is a smaller version of something which is an unsolvable problem (because you need an absolute reference point but only have relative reference points) An authority can keep an eye on everything under its control, but it cannot keep an eye on itself. “Who watches the watchers?”. This is why a ministry of truth is a bad idea and why misinformation is impossible to combat.
It’s tempting to say that openness of information is a solution (that if everyone can voice their opinions, observers can come to a sound conclusion themselves), and while this does end better, you don’t know if, for instance, a review site is deleting user reviews or not. (I just realized this is why people value transparency. But you don’t know if a seemingly transparent entity is actually transparent or just pretending to be. You can use technology which is fair or secure by design, but authorities (like the government) always make sure that this technology can’t exist.
The convenient thing about journalism is that the problems we’re worried about here are public, so you don’t need to trust the list creators as much as you would in other situations. This is why I suggest giving links to the articles, so anyone reading the list can verify for themselves that the article commits whichever sin it’s accused of.
The trickier case would be protecting against the accusers lying (i.e. tell journalist A something bad and then claim that they made it up). If you have decent verification of accusers’ identifies you might still get a good enough signal to noise ratio, especially if you include positive ‘reviews’.
You can still lie by omission, allowing evidence that shows person A’s wrongdoings, while refuting evidence that shows either person A’s examples of trustworthiness, or person B’s wrongdoings.
If I do 10 things, 8 of which are virtuous and 2 of which are bad, and you only communicate the two to the world, then you will have deceived your listeners. Meanwhile, if another person does 8 things which are bad and 2 which are virtuous, you could share those two things. One-sidedness can be harmful and biased without ever lying (negative people tend to be in this group I think, especially if they’re intelligent)
A lot of online review sites are biased, despite essentially being designed to represent regular people rather than some authority which might lie to you. They silently delete reviews, selectively accuse reviews of breaking rules (holding a subset of them to a much higher standard, or claiming that reviews are targeted harassment by some socially unappealing group), adding fake votes themselves, etc.
We can’t solve all problems with journalism, but I hope we could at least solve the narrow problem of “I said X, journalist reported that I said Y”. (Such thing happened to me, too.)
With other problems, at least I can take some lesson about how to talk to journalists more carefully the next time. Perhaps I should shut up and refuse to comment on things where I don’t have a 100% certainty, because the journalist will make me sound 100% certain. Perhaps I shouldn’t provide a list of 8 good things and 2 bad things, because the journalist will only report the bad things; I should instead only mention the one or two most relevant things. Etc.
But if I say X and the journalist writes Y, there is nothing I can do to protect against this kind of problem (other than not talking to journalists at all).
Yes, so you could do this within an organization or a community where you generally trust the other members. To avoid even exaggeration or similar, the best way would be for the complainer to provide an exact quote of what they said, and an exact quote of what was reported.
Which would require recording the words you tell to the journalist, which is probably a good idea. (Check your local laws, whether you need to warn the journalist about this, or you can simply do it without them knowing.)
There is a worry about deepfakes, so even this would only work in a community where people trust each other.
If the journalist acts in good faith, I think you will be alright. If not, there’s nothing you can do, whatsoever.
Coming up with reasons is almost too easy:
1: The journalist can write an article about you even if you’ve never talked to them
2: A journalist can start out trustworthy and then change for the worse (most untrustworthy authorities today grew powerful by being trustworthy. Now that they’ve created their public image of impartiality and fairness, they can burn it for years. Examples include Google and Wikipedia)
3: If you record me saying “I wouldn’t say I’m very interested in cars”, you just cut out the first part of the video, and now you have me saying “I’m very interested in cars”. If I quote another person, “X said that Y people are bad”, you could cut out the part of me saying “Y people are bad”. The deeper and more complex a subject you can get me to talk about, the easier it would be to take me out of context. Making Jordan Peterson look bad is trivial for instance.
4: Even if you have evidence that your words were twisted, you’ll lose if your evidence can’t reach other people. So if your values don’t align with the average journalist, or if your reputation is bad, you might find yourself relying on getting the word out by having a social media post go viral or something.
Personally, if I see a journalist or website treating anyone unfairly, I make a mental check that they’re inherently untrustworthy. I’d contact such people only if they had a stake in releasing my story (so that our goals align). As you may imagine, my standards result in me not bothering with about 90% of society. I rarely attempt to solve problems like this, because I have solved them in the past and realized that the solution is actually unwanted (that many things are flawed on purpose, and not because they lack intelligent people to help them fix them)
Things would be better if society as a whole valued truthfulness, and if winning directly (rather than with underhanded tricks) was associated with higher social status. These are the upstream chances I’d like to see in the world
That’s wrong and ignores why journalists chose to write about people and the constraints under which journalists are operating.
For most people who are on LessWrong an who might be interviewed by journalists, they would be interviewed because they can be presented as an expert on a subject. If they aren’t talking to the journalist, the journalist will usually try to find another expert to talk to them.
Right, a constraint is power. This constraint is actually the most important. In case of a power imbalance though, there’s nothing the weaker party can really do but to rely on the good-will of the other party. It’s their choice how things work out, to the extent that the game board favors them.
If the journalist isn’t too powerful, and if they benefit from listening to you, and they’re not entirely obsessed about pushing a narrative which goes against your interests or knowledge, then things are favorable and more likely to turn out well.
My argument is that we can consider these things (power difference, alignment of views, the good/bad faith of the journalist in question, etc) as parameters, and that the outcome depends entirely on these parameters, and not on the things that we pretend to be important.
Is it for instance good advice to say “Word yourself carefully so that you cannot be misinterpreted?” For how much effort Jordan Peterson put into this, it didn’t do much to help his reputation.
Reputation, power and interests matter, they are the real factors. Things like honesty, truthfulness, competence and morality are the things that we pretend matter, and it’s even a rule that we must pretend they matter, as breaking the forth wall (as I’m doing here) is considered bad taste. But the pretend-game gets in the way of thinking clearly. And I think this “advice for journalists” post was submitted in the first place because somebody noticed that the game being played didn’t align with what it was “supposed” to be. The reason they noticed is because journalists aren’t putting much effort into their deception anymore, which is because the balance of power has been skewed so much
Your argument ignores the positions you are arguing with. Nobody here has a naive idea of what drives journalists.
Generally, some of the ideas here are still potentially useful, they just don’t get you any guarantees.
When I say “There’s nothing you can do about journalists screwing you over” I mean it like “There’s nothing you can do about the police screwing you over”. In 90% of cases, you probably won’t be screwed over, but the distribution of power makes it easy for them to make things difficult for you if they hate you enough. Another example is “Unprotected WIFI isn’t secure”, you can use McDonalds internet for your online banking for years without being hacked, so in practice you’re only a little insecure, but the statement “It’s insecure” just means “Whether or not you’re safe no longer depends on yourself, but on other peoples intentions”.
From this perspective, I’m warning against something which may not even happen. But it’s merely because a bad actor could exploit these attack vectors. I’m also speaking very generally, in a larger scope than just Lesswrong users talking to journalists. This probably adds to the feeling of our conversations being disconnected.
But I will have to disagree about nobody being naive. When two entities interact, and one of the entities is barely making an effort in pleasing the other party, it’s because of a difference in power. A small company may go out of its way to help you if you call its customer support line, whereas even getting in touch with a website like Facebook (unless its through the police) is genuinely hard.
The content says “Journalists exist to help us understand the world. But if you are a journalist, you have to be good enough to deserve the name” Which seems to mean “If you’re going to trade, you need to provide something of value yourself, like offering a service”. I think this is true for journalists as individuals, but not for companies which employ journalists. If these people won’t treat you with respect, it’s because they don’t have to, and arguing with them is entirely pointless, even if you’re right. Nothing but power will guarantee a difference, and if a journalist treats you kindly it’s probably because they have integrity (which is one of the forces capable of resisting Moloch).
Repeating myself a bit here, but hopefully made my position clearer in the process.
I mostly agree, just some nitpicking:
This is exactly an example where if you also record the conversation, and then write a short post saying “I said this …, he reported that …, listen for yourself here …”, this should make me dramatically lose credibility among anyone who knows you. (Plus a small chance of your article getting viral. Or at least anytime anyone mentions my name in the future, someone else can link your article in reply.)
Also, if e.g. everyone in the rationalist community started doing this, we could collectively keep one wiki page containing all of this. (A page with more examples is a more useful resource.) And every rationalist who doesn’t have previous experience with journalists could easily look up a name there.
But things like that happen all the time, and most things that people know about most topics are superficial, meaning that they’ve only heard the accusations, and that they’re only going to encounter the correction if they care to have a conversation about the topic. If the topic is politically biased, and these people spend time in politically biased communities, then it’s unlikely that anyone is going to show them the evidence that they’re wrong. You’re not incorrect, but think about the ratio of rationalists to non-rationalists. The reach of the media vs the amount of people who will bother to correct people who don’t know the full story.
It would also be easy for the website in question to say “You’re been accused of doing X, which is bad. We don’t tolerate bad behaviour on your platform” and ban you before you get to defend yourself. If the misunderstanding is bad enough, online websites can simply decide that even talking about you, or “defending you” is a sign of bad behaviour (I think this sort of happened to Kanye West because we had a manic episode in which he communicated things which are hard to understand and easy to misunderstand)
There’s a Wikipedia page on “Gamergate”, written largely by people who don’t know what happened. And there’s a “Gamergate Wiki” with tons of information (44 pages) with every detail documented in chronological order. I want to ask you two questions about this Wiki with the “other side of the story”:
1: Have you ever heard of it?
2: Can you even find it? (the only link I have myself is an archived page)
By coincidence, 1 yes, but 2 no. And yes, that is a good example of how one side of the debate was nuked from the entire internet, which many people would believe impossible.
(Could you please send me the link in a private message?)
Solutions do not have to be perfect to be useful. Trust can be built up over time.
When the US government simultaneously tells Facebook not to delete the anti-vaccine misinformation that the government itself is spreading, while telling Facebook to delete the anti-vaccine misinformation that the government doesn’t like, it’s obvious that the institutions aren’t trustworthy, and thus they have a hard time fighting misinformation.
If the US government stopped lying, it would find it a lot easier to fight misinformation.
Outside of the government, nobody funds, with sufficient capital, an organization that values truth and has a mission to fight misinformation. All the organizations that have “fighting misinformation” on their banner are highly partisan and not centered around valuing truth.
The fact that nobody in the media asked Kamala Harris whether Joe Biden was wrong to spread anti-vax misinformation as commander in chief tells you a lot about how much the mainstream media cares about misinformation.
“Fighting misinformation” often means “rejecting views which go against one’s own political narrative”. Scientists care the most about truth, and they aren’t afraid of challenging and questioning what is known. People heavily invested in politics don’t actually care all that much about truth; they just pretend to, because it sounds noble. The “truth-seeking” kind of person is a bit of a weirdo; not a lot of them exist.
It’s sad that the problem has gotten bad enough that even people on here have recognized it, but it’s nice not seeing comments which essentially say “Official sources are untrustworthy? That’s a bold claim. Give me evidence, from a source that I consider official and trustworthy, of course.”
But I really want to point out that it’s mathematically impossible to combat falsehood, and that it doesn’t matter if you call it “misinformation” or “disinformation”. The very approach fundamentally misunderstands how knowledge works.
1: Science is about refining our understanding, which means challenging it rather than attacking anyone who disagrees. It must be “open to modification” rather than “closed”.
2: It’s impossible to know whether there are any unknown unknowns that one is missing. Absolute certainty does not seem to exist in knowledge.
3: Many things depend on definitions, which are arbitrary. “Is X a mental illness?” is decided not by X, but by a person’s relationship to X.
4: Any conversation which has some intellectual weight is going to be difficult, and unless you can understand what the other person is saying, you cannot know if they’re wrong.
5: Language seems to have a lot of relativity and unspoken assumptions. If you say “Death is bad” you may mean “For a human, the idea of death is uncomfortable if it prevents them from something that they’re capable of doing”. Arguing against “Death is bad” is trivial, “If death didn’t exist, neither would the modern man, for we evolved through darwinism”.
6: Those who only have superficial understandings always outnumber the people who know better. Consensuses favor quantity over quality, but those who are ahead of the rest must necessarily be a minority who possess obscure knowledge which is difficult to communicate. I’d go as far as saying that getting average people involved in science was a mistake. 99% of people aren’t knowledgeable enough to understand the vaccine, so their position on the matter depends on the political bias of the authority they trust, which makes everything they have to say about the topic worthless. Except, of course, statements like “The government stated X, and now they’re saying Y, so they either lied in the past or they’re lying now”, which is just basic logic.
There’s no reason why you would need absolute certainty to make progress in fighting misinformation.
When the US government wanted Facebook not to delete the misinformation it spread to get people in the Philippines to oppose the Chinese vaccine, it did not argue to Facebook that its misinformation was truthful.
If it’s not about truth value, then it’s not about misinformation. It’s more about manipulation and the harmfulness of certain information, no?
My point is about the imperfections and limitations of language. If I say “the vaccine is safe”, how safe does it have to be for my statement to be true? Is a one-in-a-million risk a proof by contradiction, or is it evidence of safety? Where’s the cut-off for ‘safety’?
I do think fighting “bad-faith manipulation” is doable at times, but I don’t think you can label anything as being true/false for certain.
Another point, which I should have mentioned earlier, is that removing false information can be harmful. Better to let it stay alongside the counter-arguments which are posted, so that observers can read both sides and judge for themselves. Believing in something false is a human right. Imagine, for instance, if believing (or not believing) in God were actually illegal.
If you actually want to fight misinformation, you need to do more than focus on single claims. You need to speak about a domain of knowledge in a trustworthy way, instead of just making claims for propaganda purposes.
A list of people’s experiences with specific journalists doesn’t give you certainty about those journalists’ habits, but it’s better than nothing. Additionally, it can pressure journalists into behaving better, because they don’t want to be shamed.