LessWrong should seriously consider implementing a “4chan mode” where a post and all its replies are anonymous and not counted for karma. I’m imagining that the poster could choose one of two options:
Anyone can reply
Only users with >K karma can reply, with eligibility enforced by some cryptographically secure protocol that avoids linking accounts together (because otherwise nobody will trust it); one possible construction is sketched below
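A blind-signature scheme is one way the second option could plausibly work. The sketch below is hypothetical (nothing like it exists on LessWrong today) and uses textbook RSA with toy-sized primes, which is insecure but keeps the protocol shape visible: the server certifies that some >K-karma account requested a reply token, without ever seeing the token it signed.

```python
# Minimal sketch of a blind-signature flow for karma-gated anonymous replies.
# Hypothetical design, not an actual LessWrong feature. Textbook RSA with
# toy-sized primes: insecure, for illustrating the protocol shape only.
#
# Flow:
#   1. User logs in normally; server checks karma > K, then signs a
#      BLINDED token, never seeing the token itself.
#   2. User unblinds the signature offline.
#   3. User presents (token, signature) from an anonymous session; the
#      server can verify it authorized *some* high-karma account, but
#      cannot tell which one.

import secrets
from math import gcd

# --- Server's RSA keypair (toy primes; real use needs 2048+ bits) ---
p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def blind(token: int) -> tuple[int, int]:
    """User side: hide the token under a random blinding factor r."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    return (token * pow(r, e, n)) % n, r

def sign_blinded(blinded_token: int) -> int:
    """Server side: sign whatever was submitted. In the full protocol this
    is where the karma > K check on the logged-in account would happen."""
    return pow(blinded_token, d, n)

def unblind(blinded_sig: int, r: int) -> int:
    """User side: divide out r to recover a signature on the raw token."""
    return (blinded_sig * pow(r, -1, n)) % n

def verify(token: int, sig: int) -> bool:
    """Anyone: check against the server's public key (n, e)."""
    return pow(sig, e, n) == token % n

# Demo: the signature verifies, yet the server only ever saw the blinded value.
token = secrets.randbelow(n - 1) + 1   # one-time anonymous reply credential
blinded_token, r = blind(token)
sig = unblind(sign_blinded(blinded_token), r)
assert verify(token, sig)
```

A real deployment would also need to record spent tokens to prevent reuse and rate-limit issuance per account, but neither detail changes the unlinkability argument.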
Why is this a good idea?
I submit: The more mature a system of ideas gets, the harder it becomes for non-anonymous discussions to yield any useful results. After 15-odd years, the “LessWrong rationality” idea-system is getting to that point, so having a way to post on LessWrong anonymously is increasingly important.
Let’s first step back and ask: why does anyone bother sharing ideas at all? What benefits does one derive from doing so that counterbalance the following prima facie costs?
(A) Communicating my ideas takes time and effort that I could spend elsewhere.
(B) Knowing things about the world gives me a strategic advantage over people who don’t know those things, which I lose if I reveal to them what I’m thinking.
These counterbalancing benefits include:
(1) Sharing my ideas can cause others to act in accordance with my values.
(2) If I gain a reputation for having good ideas, then people will give more credence to what I say in the future.
(3) I can more effectively improve my world-model by getting feedback from others.
(4) I can signal group membership by displaying familiarity with certain ideas.
(There may be more, but let’s leave it at these four.)
For each of these incentives, negative and positive alike, we can ask: does it work in favor of truth-telling (“Pro-truth”) or against it (“Anti-truth”)?
(A) is Pro-truth, because lying imposes more cognitive load than sincerity. You already have a world-model that you live by, but if you tell lies you also have to keep a model-of-your-lies on top of that.
(B) is Anti-truth, because by actively deceiving others, I can gain even more strategic advantage than if I had simply said nothing.
(1) is Pro-truth if our values align, Anti-truth if they don’t.
(2) is Pro-truth when the system is immature, Anti-truth when it’s mature.
(3) is Pro-truth, because I can’t get good feedback unless I honestly explain what I believe.
(4) is Anti-truth, because the more unbelievable an idea is, the more effective it is as a signal: professing an absurd claim is costly for anyone outside the group, which is precisely what makes it a credible badge of membership.
Why (1)? Because having true beliefs is instrumentally useful for almost any goal. If my audience consists of friends, I’ll want to give them as much true information as I can, so that they can be more effective at achieving their goals. But if my audience consists of enemies, I’ll want to give them false information, so that their efforts will be ineffective, or misdirected to promote my values instead of theirs. If the audience is a mix of friends and enemies, I’ll be reluctant to say anything at all. (This may explain OP’s reluctance to post, now that the readership of LessWrong has grown.)
Why (2)? When the system of ideas is immature, this can be a powerful Pro-truth motive—arguably this is why the Scientific Revolution got off the ground in the first place. But the more the system matures, the more I need to differentiate my ideas from others’ in order to stand out from the crowd. If I simply repeat ideas that have already been said, that gains me no reputation, since it provides no evidence that I can come up with good ideas on my own (even if in fact I did). Instead I need to plant my flag far away from others’. At the immature stage, this is easy. Later on, when the landscape grows thick with flags, truth-seeking gets thrown under the bus in favor of making a name for myself—it becomes comparatively easier to just make things up. (Maybe the reason no one else has thought of New Idea X is that it isn’t true!)
What can we do about this, if we want to promote truth-seeking? (A) and (3) are unaffected by external circumstances, so we can set them aside. (B) and (1) (which are really the same thing) can be mitigated by communicating in private walled gardens, as OP suggests, but that makes collaboration much more difficult.
This leaves us with (2) and (4). These incentives can be eliminated by forcing everyone to communicate only through anonymous, transient identities. In a mature system, posting under your real name yields a negative reputational reward for truth-telling and a positive one for lying. Anonymity sets that reward to 0 in both cases, which eliminates the bad incentive; but nobody would choose anonymity of their own accord, because they’d be leaving positive utility on the table. Therefore, anonymity must be imposed externally, by cultivating a norm whereby any statement will be disregarded unless it is anonymous. In other words, the norm must reduce the (1)-payoff of a non-anonymous statement by more than the (2)- and (4)-payoffs I’d gain by attaching my name to it. (A toy payoff model follows below.)
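To make the sign pattern concrete, here is a minimal toy model in Python. The utility numbers are invented purely for illustration; only their signs track the argument above, and a mature idea-system is assumed throughout.

```python
# Toy model of incentives (2) and (4) in a MATURE idea-system.
# The numbers are made up; only the signs matter for the argument.

def reputation_reward(truthful: bool, anonymous: bool) -> int:
    """Reputational/signaling payoff of a statement. Incentives (A), (B),
    (1), and (3) are unchanged by anonymity, so they are omitted here."""
    if anonymous:
        return 0        # transient identity: nothing accrues to a name
    if truthful:
        return -1       # honest restatement of known ideas reads as dull
    return +2           # novel fabrication plants a flag far from the crowd

# Under a real name, lying dominates truth-telling (+2 > -1); anonymity
# flattens both payoffs to 0, removing the Anti-truth gradient.
for truthful in (True, False):
    for anonymous in (True, False):
        print(truthful, anonymous, reputation_reward(truthful, anonymous))
```

The model also shows why anonymity won’t be adopted voluntarily: a fabricator gives up +2 by going anonymous, so the disregard-everything-non-anonymous norm has to cost named speech more than that.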
Accordingly, I have posted this comment under a throwaway account, which I precommit to never reusing outside this thread.