> But LessWrong a) is about figuring out what’s true/false and right/wrong, so this is a valuable domain of practice, and b) is, both in its mission and in the makeup of its membership, less likely to have problems in that domain.
I have come across similar arguments for why discussing politics on LW is worthwhile, and I didn’t find them convincing then. (It is also the case that politics is sort of about figuring out what’s true/false and right/wrong, and definitely the case that LW is less likely to have problems in that domain.) To establish that it’s actually worth it, it seems like you need to estimate both the value and the cost, and it’s not obvious to me that we’re seeing the same costs. For example, one of the non-obvious costs of talking about politics on LW is that you attract people who are relatively more interested in politics than in rationality, corroding the culture even if talking about politics actually leveled up the rationality of all of the previous users.
It does seem obvious to me that developing the skill to correctly assess whether a criticism is “wrong” is more valuable than developing the skill to correctly reason about political issues, but it’s not obvious to me that that value outweighs the varied costs to the community if this can always be a live point of discussion.
> But I think it should absolutely be a target of this community, that it does not matter whose mouth the true words or the valid questions are coming out of. If a thing is true, or a question is pointing at real uncertainty, then anyone should be able to say/ask it.
(For context, Duncan and I have talked about this some in person, but didn’t really finish the conversation.) I still think this doesn’t engage with my point, which is that interpreting a sentence is only indirectly a function from utterance to meaning. To determine the meaning of a sentence, I’m implicitly modeling the prior probability of many different meanings, the likelihood of many meaning → utterance mappings, and determining which meanings are most plausible given the utterances I read (or didn’t read). And it’s definitely the case that both the prior distribution and the likelihood distributions depend on whether the speaker is ‘first party’ or ‘second party’ or ‘third party’. On a trivial level, whether someone uses the word “I” or “Vaniver” depends a lot on whether they’re me or not me, but on a less trivial level, while the sentences “I am fair” and “Vaniver is fair” are semantically equivalent (if the first is said by me), what you can infer about the world seems very different depending on whether I’m saying the first one or a third party is saying the second one.
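To make that inference concrete, here is a minimal Bayesian sketch. All numbers are invented purely for illustration, not estimates of anything real: the point is only that the same assertion shifts a listener’s posterior by different amounts depending on who utters it, because the likelihood of the utterance under each hypothesis depends on the speaker’s incentives.

```python
# Hypotheses about the world, with a flat prior for simplicity.
PRIOR = {"fair": 0.5, "unfair": 0.5}

# P(speaker asserts "Vaniver is fair" | hypothesis, speaker identity).
# Illustrative assumption: a first party asserts their own fairness
# fairly often even when it's false (self-serving incentive), while a
# third party rarely makes the claim falsely.
LIKELIHOOD = {
    "first_party": {"fair": 0.8, "unfair": 0.5},
    "third_party": {"fair": 0.6, "unfair": 0.05},
}

def posterior(speaker: str) -> dict[str, float]:
    """Posterior over hypotheses after hearing the assertion from `speaker`."""
    unnormalized = {h: PRIOR[h] * LIKELIHOOD[speaker][h] for h in PRIOR}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

print(posterior("first_party"))   # {'fair': ~0.62, 'unfair': ~0.38}
print(posterior("third_party"))   # {'fair': ~0.92, 'unfair': ~0.08}
```

Under these assumed likelihoods, the third-party version of the claim is much stronger evidence than the first-party version, which is exactly the asymmetry described above.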
I hear you as pushing for a world where you can write “I am fair” sentences and have them be evaluated exactly as if I had written “Duncan is fair,” and I think that’s undesirable to the limited extent that it is possible.
---
I do think that it should be possible to write “I am fair” sentences, since sometimes they are relevant to a conversation and the best way forward, but it’s not obvious to me that the current cost of writing such sentences is miscalibrated.