I just reread your post and have a couple more comments.
Jill: The problem is twofold. Firstly, people find it annoying to retread the same conversation over and over. More importantly, this topic usually leads to demon conversations, and I fear that continued discussion of the topic at the rate it's currently discussed could lead to a schism. Both of these outcomes go against our value of being a premier community that attracts the smartest people, as they're actually driving these people away!
Jill: Yes, truthseeking is very important. However, it's clear that just choosing one value as sacred, and not allowing for tradeoffs, can lead to very dysfunctional belief systems. I believe you've pointed at a clear tension in our values as they're currently stated: the tension between freedom of speech and truth on the one hand, and the value of making a space where people actually want to have intellectual discussions on the other.
I think it's one thing to say that instrumentally the value of truth is maximized by placing some restrictions on people's ability to express things (e.g. no repeating the same argument again and again, you have to be civil), and a very different thing to treat something like attracting people as a top-level value to be traded off against the value of truth.
My prediction [justification needed] is that if you allow appeals to "but that would be unpopular/drive people away" to be as important as "is it true/does it cause accurate updates?", you will no longer be a place of truth-seeking, and politics will eat you, something, something. Even allowing the question "will it drive people away?" instrumentally, in service of truth, is dangerous, but perhaps safer if ultimately you're judging by the impact on truth.
Sorry, I'll work on explaining why I have that prediction. It seems that sometimes, once a model has become embedded deeply enough, it gets difficult to express succinctly in words.
So I think the actual terminal goal for something like LW might be “uncover important intellectual truths.” It’s certainly not “say true things” or the site would be served by simply republishing the thesaurus over and over.
I think if you’re judging the impact on that value, then both “freedom of speech” and “not driving people away” begin to trade off against each other in important ways.
Yes, that I agree with, and I’m happy with that framing of it.
I suppose the actual terminal goal is a thing that ought to be clarified and agreed upon. The about page has:
To that end, LessWrong is a place to 1) develop and train rationality, and 2) apply one’s rationality to real-world problems.
But that’s pretty brief, doesn’t explicitly mention truth, and doesn’t distinguish between “uncover important intellectual truths” and “cause all its members to have maximally accurate maps” or something.
Elsewhere, I’ve talked at length about the goal of intellectual progress for LessWrong. That’s also unclear about what specific tradeoffs are implied when pursuing truths.
Important questions; probably the community should discuss them more. (I thought my posting a draft of the new about page would spark this discussion, but it didn't.)