Maybe this has been discussed ad nauseam, but what do people generally think about Facebook being an arbiter of truth?
Right now, Facebook does very little to identify content; it only provides it. They have faced criticism for allowing fake news to spread on the site, they don’t push articles that have retractions, and they have only just now added a “contested” flag that’s less informative than Wikipedia’s.
So the questions are: does Facebook have any responsibility to label/monitor content, given that it can provide so much? If so, how? If not, why doesn’t this great power (showing you anything you want) come with great responsibility? Finally, if you were to build a site from the ground up, how would you design around the issue of spreading false information?
It’s a horrible idea.
No.
You’re confusing FB and Google (and a library, etc.)
I wouldn’t.
I recommend acquiring some familiarity with the concept of the freedom of speech.
I’m actually very familiar with freedom of speech and I’m getting more familiar with your dismissive and elitist tone.
Freedom of speech applies, in the US, to the relationship between the government and the people. It doesn’t apply to the relationship between Facebook and users, as exemplified by their terms of use.
I’m not confusing Facebook and Google; Facebook also has a search feature, and quite a lot of content can be found within Facebook itself.
But otherwise, thanks for your reply; its stunning lack of detail gave me no insight whatsoever.
You seem to be mistaken about your familiarity with the freedom of speech. In particular, you’re confusing it with the 1st Amendment to the US Constitution. That’s a category error.
LOL. Would you assert that you represent the masses?
A stunning example of narcissism :-P Hint: it’s not all about you and your lack of insight.
So are you going to actually explain why “freedom of speech” (not a negative right, but platform owners allowing users to post whatever they want) is a good thing?
Sniff… sniff… smells like a bad-faith question. You don’t imagine you’re setting a trap for me or anything like that?
Can you at least try to articulate why you believe this? When you make a statement like this with very few arguments, in response to a genuine question, it doesn’t matter if you feel the post you’re responding to is incredibly misguided or based on poor understanding. It’s simply condescending to respond this way. Now, as of my writing this comment, your response has 6 upvotes. For a forum with a lot of posts with zero votes, it’s pretty rare to have posts with this many upvotes, unless a lot of community members feel your response added a lot of light to the conversation. So if anyone is reading this who upvoted Lumifer’s post, can you explain why you felt it was worthy? This is a pretty deep mystery for me on a forum where people who argue things in depth, like Eliezer or Yvain, are usually held up as people we should try to emulate.
No, I don’t think so. A short answer does not implicitly accuse the question of being stupid or misguided.
It was a simple direct question. I have a simple direct answer to it without much in the way of hedging or iterating through hands or anything like that.
If someone asks you “vanilla or chocolate?” and you’re a chocoholic, you answer with one word and not with a three-page essay on how and why your love for chocolate arose and developed.
Now your question of “why?” could easily lead to multiple pages but tl;dr would be that I like freedom, I don’t like the Ministry of Truth, and I think that power corrupts.
I would offer a guess that the upvotes say “I agree” and not “this was the most insightful thing evah!” :-)
I don’t think that freedom of speech is enforceable inside a privately-owned network.
We’re talking about “should”, the normative approach. A private entity can do a lot of things—it doesn’t mean that it should do these things.
Freedom of speech is not just a legal term, it’s also a very important component of a civil society.
Still: should Lesswrong allow the discussion of any off-topic subject just because “free speech”?
...did anyone claim anything like that?
You did, implicitly.
I did not. You read me wrong.
Lumifer didn’t say anything about enforceability. E.g. the boy scouts have the right (as a private group, if you accept that a group with the U.S. president as their figurehead is in fact private) to disallow membership based on gender, sexual orientation, or religion. That doesn’t mean it is right for them to do so. One should expect that in a civilized society groups like the boy scouts shouldn’t discriminate based on things like sexual orientation. But that doesn’t necessarily imply that there should be regulatory action to enforce that.
Likewise, Facebook should be a public commons where freedom of speech is respected. But that doesn’t mean I’d call for regulatory enforcement of that.
Agreed in principle, but there are certain situations where the boundaries are much less clear. If I ran a gentlemen’s club, should I allow women? Obviously not, and it’s not even discrimination.
If I ran LessWrong, should I allow the discussion of theology? Obviously not, and one shouldn’t, in the normative sense, invoke freedom of speech to allow trolling.
At the same time, I can create a social network which is devoted to the dissemination of only carefully verified news, and no one should be able to invoke freedom of speech to hijack this mission.
LW discusses theology all the time, it just uses weird terminology and likes to reinvent the wheel a lot.
The whole FAI problem is better phrased as “We will create God, how do we make sure He likes us?”. The Simulation Hypothesis is straight-up creationism: we were created by some, presumably higher, beings for their purposes. Etc.
You are strawmanning both positions a lot...
No, I’m being quite literal here.
I see no meaningful difference between a god and a fully-realized (in the EY sense) AI. And the Simulation Hypothesis is literally creationism. Not necessarily Christian creationism (or any particular historic one), but creationism nonetheless.
Hell yeah, bro. Sufficiently advanced Superintelligence is indistinguishable from God.
I don’t think we have any ban on discussion on theology or that it was mentioned in any discussion we had about what might be valid reasons to ban a post.
Theology was just an example, but a relevant one: in a forum devoted to the improvement of rationality, discussing a flavor of thought that has long since been shown to be irrational should amount to trolling. I’m not talking about trying to rationally justify theism, which had and might still have a place here, but discussing theology as if theism were true shouldn’t be allowed.
On the other hand, you cannot explicitly ban everything that is off-topic, so the fact that it isn’t written down anywhere shouldn’t count as proof against it.
LW never used to have an explicit or implicit ban against being off-topic. Off-topic posts used to get downvoted and not banned.
We delete spam, we delete advocacy of illegal violence, and the Basilisk got deleted on the grounds that it’s a harmful idea.
An off-topic post about theism would be noise and not harmful, so it’s not worth banning under our philosophy for banning posts.
In addition, I don’t think that it’s even true that a post about theology has to be off-topic. It’s quite common on LW that people use replacement Gods like Omega for exploring thought experiments. Those discussions do pretend that “Omega’s existence is true”, and that doesn’t make them problematic in any way. Taking a more traditional God instead of Omega wouldn’t be a problem.
It’s also not even clear that theism has been proved irrational. In the census, a significant portion of respondents allocate more than 0 percent probability to it being true. I think at the first Double Crux we did at LW Berlin, someone updated in the direction of theism. A CFAR person did move to theism after an elaborate experiment with the Reverse Turing test. LW likely wouldn’t have existed if it weren’t for the philanthropic efforts of a certain Evangelical Christian.
In his posts about post-rationality, David Chapman made the point that his investigation of religious ideas like Tantra allowed him to make advances in AI while at MIT that he likely otherwise wouldn’t have made.
Actually neither of those are obvious to me.
That’s a weird position to have: basically you’re saying that there’s no moral way to limit the topics of, or access to, a closed group.
Am I representing you correctly? If not, where would you put the boundaries?
Those specific examples are bad examples.
Gentlemen’s clubs are actually concentrations of power where informal deals happen. Admitting women to these institutions is vital to having gender equality at the highest echelons of civic power.
And theology is discussed all the time on LW, even if it is often the subject of criticism.
I was just saying that those particular examples were poorly chosen. But since you have me engaged here, the problem with taking an absolutist view is when a private communication medium, e.g. Facebook, becomes a medium of debate over civic issues. Somewhere along the way it becomes a public commons vital to democracy, where matters of free speech must be protected. In some countries (not the USA) this is arguably already the case with Facebook.
I agree there is a big danger of slipping down the free speech slope if we fight too hard against fake news, but I also think we need to consider a (successful) campaign by another nation to undermine the legitimacy of our elections as an act of hostile aggression, and in times of war most people agree that some measured limitation of free speech can be justified.
You shouldn’t uncritically ingest all the crap the media is feeding you. It’s bad for your health.
So we are at war with Russia? War serious enough to necessitate suspending the Constitution?
No, at least not yet. That’s a good point. But Facebook is a private company, so filtering content that goes against their policy need not necessarily violate the constitution, right? I don’t know the legal details, though, I could be completely wrong.
Facebook can filter the content, yes, but we’re not discussing the legalities, we’re discussing whether this is a good idea.
All of the information submitted to Wikileaks was real. Even if it came from Russia it was nothing to do with Fake News.
You know, your campaign against fake news might be taken slightly more seriously if you didn’t immediately follow it up by asserting a piece of fake news as fact.
I’ve just been skimming the wiki page on Russian involvement in the US election.
SecureWorks stated that the actor group was operating from Russia on behalf of the Russian government with “moderate” confidence level
The other claims seem to just be that there was Russian propaganda. If propaganda and possible spying counts as “war” then we will always be at war, because there is always propaganda (as if the US doesn’t do the same thing!). The parallels with 1984 go without saying, but I really think that the risk of totalitarianism isn’t Trump, it’s people overreacting to Trump.
Also, there are similar allegations of corruption between Clinton and Saudi Arabia.
Facebook is full of bullshit because it is far quicker to share something than to fact-check it, not that anyone cares about facts anyway. A viral alarmist meme with no basis in truth will be shared more than a boring, balanced view that doesn’t go all out to fight the other tribe.
But Facebook has always been full of bullshit, and no one cared until after the US election, when everyone decided to pin Trump’s victory on fake news. So it’s pretty clear that good epistemology is not the genuine concern here.
Not that I’m saying that Facebook is worse than any other social media; the problem isn’t Facebook, the problem is human nature.
“Arbiter of truth” is too big a phrase.
People easily forget two important things:
Facebook is social media, emphasis on media: it allows the dissemination of content; it does not produce it;
Facebook is a private, for-profit enterprise: it exists to generate revenue, not to provide a service to citizens.
Force 1 obviously acts against any censoring or control beyond what is strictly illegal, but force 2 pushes for the creation of an environment that is customer-friendly. That is the only reason why there is some form of control on the content published: because doing otherwise would lose customers.
People are silly if they delegate the responsibility of verifying the truth of content to the transport layer, and the only reason a flag button is present is that doing otherwise would lose customers.
That said, to answer your question:
No, Facebook does not have any responsibility beyond what is strictly illegal. That from power comes responsibility is a silly implication written in a comic book, but it’s not true in real life (it’s almost the opposite). As a general rule of life, do not acquire your facts from comics.
Since we’re talking about Facebook, it’s worth remembering that the customers are the advertisers. All y’all are just the product being sold.
Right, the chain has one more step, but still: if people start unsubscribing from Facebook, then the money goes elsewhere and so do the advertisers.
“That from power comes responsibility is a silly implication written in a comic book, but it’s not true in real life (it’s almost the opposite). ”
Evidence? I 100% disagree with your claim. Looking at governments or business, the people with more power tend to have a lot of responsibility both to other people in the gov’t/company and to the gov’t/company itself. The only kind of power I can think of that doesn’t come with some responsibility is gun ownership. Even Facebook’s power of content distribution comes with a responsibility to monetize, which then has downstream responsibilities.
You’re looking only at the walled garden of institutions inside a democracy. But if you look at past history, authoritarian governments or muddled legal situations (say some global corporations), you’ll find out that as long as the structure of power is kept intact, people in power can do pretty much as they please with little or no backlash.
Half of the US voted for Trump. If Facebook made a move that censored a lot of pro-Trump lies, it would risk losing a significant portion of its audience.
I’m not sure whether the function of verifying the quality of news articles is best fulfilled by a traditional social network. If I cared to solve the problem, I would build a browser plugin that provides quality ratings of articles and websites. Users could vote, and a machine-learning algorithm would translate the user votes into a good quality metric.
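To make the vote-to-score step concrete, here is a minimal sketch in which a simple Bayesian average stands in for the machine-learning part; the class name, the 0-to-1 vote scale, and the prior weight are all invented for illustration and are not part of the original proposal.

```python
from collections import defaultdict

class QualityIndex:
    def __init__(self, prior_mean=0.5, prior_weight=10):
        self.votes = defaultdict(list)    # url -> list of user votes in [0, 1]
        self.prior_mean = prior_mean      # assumed average quality of an unknown site
        self.prior_weight = prior_weight  # how many "virtual votes" the prior is worth

    def vote(self, url, score):
        """Record one user's rating (0 = junk, 1 = high quality)."""
        self.votes[url].append(max(0.0, min(1.0, score)))

    def quality(self, url):
        """Bayesian-averaged quality estimate the plugin would display."""
        v = self.votes[url]
        return (self.prior_mean * self.prior_weight + sum(v)) / (self.prior_weight + len(v))

index = QualityIndex()
index.vote("example.com/article", 0.9)
index.vote("example.com/article", 0.2)
print(round(index.quality("example.com/article"), 3))  # 0.508: two votes barely move the prior
```

The prior weight keeps sites with only a handful of votes from swinging to extreme scores, which is the main thing a more serious learning algorithm would also have to handle.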
Let’s try to frame this with as little politics as possible...
You build a medium where people can exchange content. Your original goal is to make money, so you want to make it as popular as possible: in the perfect case, the Schelling point for anyone debating anything.
But you notice that certain messages, optimized for virality, make up a disproportionate fraction of your content. You don’t like this… either because you realize you actually have values beyond “making money”… or because you realize that in the long term this could have a negative impact on your medium if people start to associate it with low-quality viral messages; you aim to be the king of all content, not only of yellow journalism. There is a risk that your competitor would make a competing medium that is more pleasant to read, at least at the beginning, and gradually take over your readers.
Some quick ideas:
a) censor specific ideas
a.1) completely, e.g. all kitten videos get deleted
a.2) penalize kitten videos in content aggregation
Problem: This will get noticed, and people who love kitten videos will move to your competitors.
b) target virality itself
b.1) make it more difficult to share content
This goes too strongly against your goal of being an addictive website for simpletons.
b.2) penalize mindless sharing
For example, you have one-click-sharing functionality, but you can optionally add your own comment. Shares with hand-written comments will get much higher priority than shares without them. The easier to share, the faster to disappear.
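One way to cash out “the easier to share, the faster to disappear” is to give bare one-click shares a shorter ranking half-life than commented shares. A toy sketch, not anything any real site is known to do; the half-lives and boost factor are made-up numbers.

```python
import time

def share_score(base_score, shared_at, has_comment, now=None):
    """Feed-ranking score for one share: bare one-click shares decay faster."""
    now = now if now is not None else time.time()
    age_hours = (now - shared_at) / 3600.0
    half_life = 24.0 if has_comment else 6.0  # hours until the score halves
    boost = 1.5 if has_comment else 1.0       # commented shares also start higher
    return base_score * boost * 0.5 ** (age_hours / half_life)

twelve_hours_ago = time.time() - 12 * 3600
print(round(share_score(100, twelve_hours_ago, has_comment=True)))   # ~106, still prominent
print(round(share_score(100, twelve_hours_ago, has_comment=False)))  # 25, mostly gone
```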
b.3) penalize articles with too many shares (globally)
Your advantage, as a huge website, is that you know which articles are popular worldwide. Unfortunately, soon there will be SEO techniques to circumvent any action you take, such as showing the same content to different users under different URLs (or whatever will make your system believe it is different content).
c) distributed “censorship”
You could add voluntary “content rating” or “content filtering” functionality, where anyone can register as a rating/filtering authority and people can voluntarily subscribe to them. The authorities flag content, and you can choose either to see the content flagged or to have it automatically removed. Important: make the user interface really simple (for the subscribers).
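A rough sketch of how the subscription mechanics could work; the authority names and post ids are made up, and the two policies correspond to “see the content flagged” and “have it automatically removed”.

```python
FLAG, HIDE = "flag", "hide"

# Which posts each (hypothetical) rating authority has flagged.
authority_flags = {
    "SkepticsGuild": {"post-17", "post-42"},
    "KittenHaters": {"post-99"},
}

# One user's subscriptions and the policy chosen per authority.
user_subscriptions = {"SkepticsGuild": HIDE, "KittenHaters": FLAG}

def render_feed(posts):
    """Apply the user's subscriptions: hide or label flagged posts, pass the rest through."""
    feed = []
    for post in posts:
        flagged_by = [a for a, policy in user_subscriptions.items()
                      if policy == FLAG and post in authority_flags.get(a, set())]
        hidden = any(policy == HIDE and post in authority_flags.get(a, set())
                     for a, policy in user_subscriptions.items())
        if not hidden:
            feed.append((post, flagged_by))
    return feed

print(render_feed(["post-17", "post-42", "post-99", "post-7"]))
# [('post-99', ['KittenHaters']), ('post-7', [])] -- the SkepticsGuild posts are hidden
```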
But I guess most people wouldn’t use this anyway.
d) allow different “profiles” or “channels” for users
Not sure about details, but suppose there are different channels for politics, kitten videos, programming, etc… and you can turn them on and off. Many people would not turn on the politics channel, making the political news less viral.
Potential problems: does “JavaScript inventor fired for political donation” belong under “programming” or “politics”? Who defines the ontology? Etc.
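To illustrate both the toggle and the ontology problem just raised, a tiny sketch with placeholder channel names: a post tagged with both “programming” and “politics” vanishes for anyone who keeps politics switched off.

```python
# Per-user channel switches; channel names are placeholders.
user_channels = {"kittens": True, "programming": True, "politics": False}

def visible(post_channels, channels=user_channels):
    """Show a post only if every channel it is tagged with is switched on."""
    return all(channels.get(c, False) for c in post_channels)

print(visible({"programming"}))              # True
print(visible({"programming", "politics"}))  # False: the JavaScript-inventor story disappears
```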
Relevant: today’s discussion on HN of how Facebook shapes the feeds on its platform and what various people think about it.
A better question is why we should trust Facebook to do so honestly, rather than abusing that power to declare lies that benefit Mark Zuckerberg to be “facts”. Given the amount of ethics, or rather lack thereof, his actions have shown so far, I see very little reason to trust him.
They certainly do identify content, and indeed alter the way that certain messages are promoted.
Example.
Who decides what is and is not fake news?
Not quite what I meant about identifying content but fair point.
As for fake news, the most reliable way to tell is whether the piece states information as verifiable fact, and if that fact is verified. Basically, there should be at least some sort of verifiable info in the article, or else it’s just narrative. While one side’s take may be “real” to half the world, the other side’s take can be “real” to the other half of the world, but there should be some piece of actual information that both sides look at and agree is real.
Verified by whom? There is a long history of “facts verified by official sources” turning out to be false.
That means if you have an investigative reporter with non-public sources, that’s fake news because the other side has no access to his non-public sources?