If the problem is that our best authors went elsewhere, would it not be a good idea for fans to take their best writing and re-post it here for them? I mean, if they’d actually prefer that not to happen, then ok. But are we sure about that?
What were their stated reasons for leaving? What were their real reasons?
Negativity in the discussion was mentioned. Not sure how important this is compared with other reasons.
Also, some people post both LW-type content and non-LW-type content. The latter does not belong on LW, so they create a separate blog. When the blog attracts its own community of readers, they may prefer to also post the LW-type content there, especially when the boundaries are not clear. (Some of them do repost the LW-type content here afterwards.)
In my opinion, the essence of the problem is that people instinctively play status games all the time, even when they say they would prefer to do something else instead. It is hard to abandon the game when even “saying that you would prefer to stop playing the game” can be used as a successful move within the game. Actually, denying that you are playing the game is almost a requirement in most situations, and accusing other people of playing the game is an attack move within the game. The game goes on automatically; whatever you do, you gain or lose a few points, and other people see it. Even if you say “I am not playing the game”, other people see you winning points, and they also want a few points for themselves.
And then, we have the instinct that status is connected with various things, especially with the ability to hurt other people and to defend yourself successfully from being hurt. Oh, we are civilized people, so in most situations we avoid the worst forms of violence, but in every situation there is a permissible range: maybe only verbal attacks, maybe only passive-aggressive behavior, but some of us are very good at using whatever is available. Seeing that someone has gained too many points without the ability to defend themselves and attack their enemies provokes an attack. Not necessarily from someone who wants to replace the target, but simply from someone who feels that the point difference between them and the target has become disproportionately large compared with their own estimate of how large it should be.
How this looks from the outside (among civilized people who wouldn’t admit playing the game) is illustrated here. Essentially, whenever you do something that is “too good” (something that brings you many more points than you “should have”, given your perceived ability to attack and defend yourself), many people will feel the urge to criticize you and your work, to reduce the difference. From the inside, I guess they will either convince themselves that the work is actually not good, or imagine some dangerous things you are totally going to do with your newly gained points (and see themselves as heroes who prevented this danger), or simply deny that they are attacking you.
This can be very exhausting for a person who wants to focus on creating good content but doesn’t want to spend their time defending themselves from attacks. The usual outcome is that the person stops producing the good content, and the status balance is maintained. Which is quite bad for us, the ones who want to consume the good content.
Another option is to retreat to a fortress where defense is much easier, such as Facebook, where you can block the attackers in a few seconds, and they usually won’t create another account only to bother you (and even if they do, you can still make your messages visible only to your friends). If you are willing to solve the related technical problems, you can use your own blog.
So, the question is: can we do anything to prevent good authors from having to retreat to their own fortresses (or not writing / not publishing anymore) after they gain “too much” points for doing what we want them to do? What kind of platform would achieve that?
There is a standard solution, and most people call it “censorship”. You create a place where the authors can publish, and where all attacks are removed. Preferably by a third-party moderator, so the authors don’t even see them, and don’t have to waste their own time deleting them.
I can imagine how most people would react to this proposal. No, we can’t remove all negative feedback; we need a way to tell genuinely bad authors that their work honestly sucks! Otherwise stupidity will prevail! Sure… but the whole problem is that we are running on corrupted hardware, so when the situation comes and our status-regulation emotion kicks in, we will start believing that the author is genuinely bad, that the work genuinely sucks, and that there is a very real and very urgent danger of genuinely horrible things happening unless the author is given negative feedback as strongly as possible. :(
(“Oh no, Eliezer has an opinion on quantum physics that only a few experts agree with, but other experts disagree! And he believes that Bayes’ Theorem is super important, and Bayes’ Theorem really is important, but not as important as he believes! And he once deleted Roko’s Basilisk and provided a totally unsatisfying PR explanation! And he asks people to send him money! And he has multiple girlfriends! This is totally a cult, worse than Scientology! They are going to spread wrong interpretations of quantum physics and then they will commit mass suicide! Someone think of the children! Don’t read the Sequences! Don’t read HPMoR! Tell everyone, and warn them about the danger! Write an article on RationalWiki, and Wikipedia, and your local news, and contact all skeptical organizations you know, and post on Facebook and Reddit! Someone stop this dangerous guy from having too much status!”)
The proposal of “censorship” is value-neutral. There are authors who should be attacked and authors who shouldn’t be; the proposed mechanism protects both equally. Making a mechanism that protects that and only that which should be protected is an FAI-complete problem. At some moment human judgement has to be applied, and at that moment you should expect the known psychological forces to manifest.
Another option is to remove debates completely; then you avoid the accusations of censorship, but you also lose the potentially good comments. Sure, people will comment on a different website, but that’s okay—such comments aren’t linked to the criticized article as strongly as comments directly below the article would be. (And you cannot prevent comments on a third-party website anyway.) Publishing a book is one way to do this: no one can write their comment into all copies of your book.
Yet another option is to make attacking costly: for example, you would be allowed to publish a critique of an article, but the critique itself would have to be a well-written article (preferably explaining and supporting the critic’s own position, not merely saying “X is wrong”, so that the critic is now equally exposed to attack) and would have to be accepted by editors. Of course the editors are going to be accused of partiality; that’s inevitable. (Replace the editors with a popular vote, and then we need someone to decide who is an eligible voter, and we still have the status-regulation emotion urging people to upvote a critique that doesn’t fulfill the criteria but feels well-deserved anyway.)
One serious, business answer is medium.com

Here is a look at what they are trying to do. Sample:
My feeling is that what Medium is aiming at is to accomplish the vision of Vannevar Bush: hypertext done properly, in a way so that the community dynamics and the financial dynamics work, and reinforce the good parts of many-to-many hypertext rather than the bad parts, and avoiding the tearing-itself-apart in the many, many different ways we have seen over the past two decades...
Could you describe specifically how commenting works on medium.com? That seems to me like the important part, where you just can’t make everyone happy, because some people want mutually contradictory things (such as “to filter unwanted comments” vs. “not to be filtered”).
Commenting is actually one of the most interesting parts of Medium. It’s surprisingly similar to a combination of your “removing debates” and “making attacking costly” options—you can reply to a post on Medium, and your reply is itself a post on your own Medium page, with a metadata tag linking it to the post you’re replying to. People will generally not see your reply underneath the original post, but they will see an ‘other replies’ button they can click to reveal it. But people can recommend your post; if your post is recommended by (1) the original post’s author, (2) Medium staff (I think?), or (3) someone I follow, then I will automatically see it under the original post like a ‘comment’, above the ‘show other replies’ button.
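If I’ve understood those rules correctly, the highlighting decision boils down to something like this—a minimal Python sketch, where `Post`, `is_highlighted`, and all the field names are my own inventions, not Medium’s actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    reply_to: "Post | None" = None          # a reply is a full post tagged with its parent
    recommended_by: set[str] = field(default_factory=set)

def is_highlighted(reply: Post, viewer_follows: set[str], staff: set[str]) -> bool:
    """Whether a reply is shown directly under the original post,
    above the 'show other replies' button, for a given viewer."""
    return (reply.reply_to.author in reply.recommended_by     # recommended by the original author
            or bool(reply.recommended_by & staff)             # ...or by Medium staff
            or bool(reply.recommended_by & viewer_follows))   # ...or by someone the viewer follows

# Example: Bob replies to Alice's post; Alice recommends the reply,
# so every viewer sees it as a 'comment' under her post.
original = Post("alice", "An essay.")
reply = Post("bob", "A response.", reply_to=original, recommended_by={"alice"})
assert is_highlighted(reply, viewer_follows=set(), staff=set())
```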
Wow, I’m impressed! This is pretty close to how I imagined it, and it also seems simple enough for everyone to understand.
Essentially, by default you only see content recommended by someone you care about (in the long term you care about the people you follow; in the short term you care about the person whose article you are reading right now). So people cannot forcefully insert themselves into debates.
I’m trying to imagine how Facebook would look if it switched to this system (using the existing “like” button as the sign of approval). So when you post something on your wall, the comments you “liked” are displayed to all readers; the comments you didn’t like are displayed only to friends of the person who posted them; and you are not allowed to remove any comment.
Sounds reasonable, assuming there is a visible difference between “the comments I didn’t approve because I don’t want to approve them” (e.g. via a “hide” button) and “the comments I haven’t approved because I haven’t seen them yet”.
The only possible form of “spamming” here is to annoy someone by posting many replies to their articles, and even then you are only annoying them privately. (There should be a way to block a user, that is, to “auto-hide” all their replies; then the only remaining way of “spamming” would be posting many replies from many sockpuppet accounts, which would cost the usual attacker much more time than it costs the attacked person.)
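For concreteness, the rules from the last few comments—approval by “like”, the visible pending/hidden distinction, and blocking as auto-hide—could be sketched like this (a toy model; none of these names correspond to any real Facebook API):

```python
from enum import Enum, auto

class State(Enum):
    PENDING = auto()    # the wall owner hasn't reviewed this comment yet
    APPROVED = auto()   # the wall owner "liked" it
    HIDDEN = auto()     # the wall owner pressed "hide" (visibly distinct from PENDING)

def comment_visible(state: State, commenter: str, viewer: str,
                    friends_of: dict[str, set[str]], blocked: set[str]) -> bool:
    """Approved comments are displayed to all readers; pending and hidden
    comments fall back to the commenter and their friends. A blocked
    commenter's replies are auto-hidden regardless of state."""
    if commenter in blocked:
        state = State.HIDDEN    # block == auto-hide all of that user's replies
    if state is State.APPROVED:
        return True
    return viewer == commenter or viewer in friends_of.get(commenter, set())
```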
Maybe the disadvantage is that it kills the “linear debate of trivial comments”: the type of discussion where everyone types only a line or two, which most resembles how people chat. Maybe that’s good, but people who want to chat without writing an article-length reply would miss this feature.
So I guess my perfect system would be a combination of the Medium way plus an old-style linear discussion below the article, where all replies are invisible until approved by the author (optionally, the author could switch it to “auto-approve” with the possibility to delete anything afterwards). Or, to make it more unified, every reply would start as a comment below the article, but with a checkbox “also show this reply on my homepage as an article”. All approved replies would be displayed below the article, but replies longer than three lines (which includes full articles) would be shortened until you click to expand them.
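As a rough sketch of that display rule (the three-line threshold is from the proposal above; the constants and names are just my own reading of it):

```python
from dataclasses import dataclass
from typing import Optional

COLLAPSE_OVER_LINES = 3   # replies longer than this start out collapsed

@dataclass
class Reply:
    text: str
    approved: bool = False
    also_article: bool = False   # the "also show this reply on my homepage as an article" checkbox

def render(reply: Reply, auto_approve: bool) -> Optional[str]:
    """How a reply appears below the article: None while unapproved,
    a collapsed preview if it is long, the full text otherwise. With
    auto-approve on, everything shows immediately (the author can
    still delete anything afterwards)."""
    if not (reply.approved or auto_approve):
        return None               # invisible until the author approves it
    lines = reply.text.splitlines()
    if len(lines) > COLLAPSE_OVER_LINES:
        return "\n".join(lines[:COLLAPSE_OVER_LINES]) + "\n[click to expand]"
    return reply.text
```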
I don’t play there so I don’t know—but it’s an open website, you can go take a look any time...
“Only a few” are as committed to it as Eliezer is, but many, many more consider it at least somewhat plausible.
I think the word you’re looking for is “moderation”.
It’s one of those flexible words: I keep the discussion polite; you moderate; he censors.
They are usually called “irregular verbs” :-)
If I remember right, the most recent survey asked those exact questions. So we may well find out.